Re: [openstack-dev] [ceilometer]Looking for endpoint where event passed after first time

2017-04-19 Thread Hui Xiang
Thanks gordon.

I am using the Mitaka version.

And yes, I know that the ceilometer notification agent listens on the
notification topics, but my question is which file/class does it.
When I am debugging the code, the first time the event is sent out to the
ceilometer exchange notification topic, EventsNotificationEndpoint in
event/endpoint.py handles it; however, when I send the same event
again, with log/pdb enabled, the event is no longer processed in
EventsNotificationEndpoint, and I can't find where it is handled. It
looks weird, or maybe that is by design for some reason? The behavior is the
same with or without a definition in event_definitions.yaml.

So I wonder how the workflow differs when the same event is sent twice.


Thanks.

On Thu, Apr 20, 2017 at 6:11 AM, gordon chung  wrote:

>
>
> On 19/04/17 03:05 PM, Hui Xiang wrote:
> > Hi folks,
> >
> >   I am posting some self-defined events to the amqp ceilometer exchange
> > notification topic. During the debug session, I find that the first
> > time, the event arrives at the notification bus as an AMQPIncomingMessage
> > and is then handled by EventsNotificationEndpoint(); however, the second
> > time, neither the AMQPIncomingMessage nor the EventsNotificationEndpoint
> > can see it, yet it can still be listed from ceilometer event-list. So I wonder
> > how the event is processed for the same event type?
> >
>
> not sure what version you are using, but the basic workflow since mitaka (i
> think) is the notification agent listens to 'notifications.*' topics on
> specific exchanges (including ceilometer). for events, it attempts to
> match the incoming message against known events[1]. it will build an event
> based on the definition. if there is no definition, it will create a sparse
> event with a few attributes. from there it is processed according to the
> pipeline[2]. you'll notice by default it pushes to gnocchi so it won't
> get stored in ceilometer/panko storage. you can edit it accordingly.
>
> [1]
> https://github.com/openstack/ceilometer/blob/master/
> ceilometer/pipeline/data/event_definitions.yaml
> [2]
> https://github.com/openstack/ceilometer/blob/master/
> ceilometer/pipeline/data/event_pipeline.yaml
>
> cheers,
> --
> gord
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Joshua Harlow


Doug Hellmann wrote:

Excerpts from Clark Boylan's message of 2017-04-19 08:10:43 -0700:

On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:

Hoy,

So Gnocchi gate is all broken (again) because it depends on "pbr" and
some new release of oslo.* depends on pbr!=2.1.0.

Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
that got it banished by the requirements Gods. It does not prevent it from being
used e.g. to install the software or get version information. But it
does break anything that is not in OpenStack because well, pip installs
the latest pbr (2.1.0) and then oslo.* is unhappy about it.

It actually breaks everything, including OpenStack. Shade and others are
affected by this as well. The specific problem here is that PBR is a
setup_requires which means it gets installed by easy_install before
anything else. This means that the requirements restrictions are not
applied to it (neither are the constraints). So you get latest PBR from
easy_install then later when something checks the requirements
(pkg_resources console script entrypoints?) they break because latest
PBR isn't allowed.

We need to stop pinning PBR and more generally stop pinning any
setup_requires (there are a few more now since setuptools itself is
starting to use that to list its deps rather than bundling them).
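
To make the two dependency channels concrete, here is a minimal sketch (a
hypothetical package, not any real OpenStack project; the pins are only
illustrative) of where the difference shows up:

# setup.py -- minimal sketch of the two dependency channels described above.
import setuptools

setuptools.setup(
    name='example-lib',
    version='0.1.0',
    # Fetched by easy_install at setup time, *before* pip applies any
    # requirements pins or constraints, so the newest pbr on PyPI wins.
    setup_requires=['pbr'],
    # Handled by pip afterwards; this is where a pin like pbr!=2.1.0
    # (e.g. synced into an oslo.* requirements list) is enforced and then
    # conflicts with the pbr that easy_install already fetched.
    install_requires=['pbr!=2.1.0'],
)

Running something like "pip install -c upper-constraints.txt ." only
constrains the install_requires side, which is why the setup_requires entry
escapes both the pin and the constraints.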


So I understand the culprit is probably pip's installation scheme, and we
can blame it until we fix it. I'm also trying to push pbr 2.2.0 to
avoid the entire issue.

Yes, a new release of PBR undoing the "pin" is the current sane step
forward for fixing this particular issue. Monty also suggested that we
gate global-requirements changes on requiring that changes not pin any
setup_requires.


But for the future, could we stop updating the requirements in oslo libs
for no good reason? just because some random OpenStack project hit a bug
somewhere?

For example, I've removed requirements update on tooz¹ for more than a
year now, which did not break *anything* in the meantime, proving that
this process is giving more problems than solutions. Oslo libs doing that
automatic update introduce more pain for all consumers than anything (at
least not in OpenStack).

You are likely largely shielded by the constraints list here which is
derivative of the global requirements list. Basically by using
constraints you get distilled global requirements and even without being
part of the requirements updates you'd be shielded from breakages when
installed via something like devstack or other deployment method using
constraints.


So if we care about Oslo users outside OpenStack, I beg us to stop this
craziness. If we don't, we'll just spend time getting rid of Oslo over
the long term…

I think we know from experience that just stopping (eg reverting to the
situation we had before requirements and constraints) would lead to
sadness. Installations would frequently be impossible due to some
unresolvable error in dependency resolution. Do you have some
alternative in mind? Perhaps we loosen the in project requirements and
explicitly state that constraints are known to work due to testing and
users should use constraints? That would give users control to manage
their own constraints list too if they wish. Maybe we do this in
libraries while continuing to be more specific in applications?


At the meeting in Austin, the requirements team accepted my proposal
to stop syncing requirements updates into projects, as described
in https://etherpad.openstack.org/p/ocata-requirements-notes

We haven't been able to find anyone to work on the implementation,
though. It is my understanding that Tony did contact the Telemetry
and Swift teams, who are most interested in this area of change,
about devoting some resources to the tasks outlined in the proposal.

Doug


My 2c,

Cheers,



Wasn't there also some decision made in Austin (?) about how we as a
group stated something along the lines of co-installability isn't as
important as it once was (and may not even be practical or what people
care about anymore anyway)?


With kolla becoming more popular (tripleo I think is using it, and ...) 
and the containers it creates making isolated per-application 
environments, it makes me wonder what of global-requirements is still 
valid (as a concept) and what isn't.


I do remember the days of free-for-all requirements (or requirements 
sometimes just put/stashed in devstack vs elsewhere), which I don't 
really want to go back to; but if we finally all agree that 
co-installability isn't what people actually do and/or care about 
(anymore?) then maybe we can re-think some things?


I personally still like having the ability to know some set of 
requirements works for certain project X for a given release Z (as 
tested by the gate); though I am not really concerned about whether the same 
set of requirements works for certain project Y (also in release Z). If 
this is something others agree with then perhaps we just need to store 
those requirements and the 

Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Steve Baker
On Thu, Apr 20, 2017 at 8:12 AM, Michał Jastrzębski 
wrote:

> So after discussion started here [1] we came up with something like that:
>
> 1. Docker build will create "fingerprint" - manifesto of versions
> saved somewhere (LABEL?)
>

This would be great, especially a full package version listing in an image
label. However I don't see an easy way of populating a label from data
inside the image. Other options could be:
- have a script inside the image in a known location which generates the
package manifest on the fly, do a docker run whenever you need to get a
manifest to compare with another image.
- write out the package list during image build to a known location, do a
docker run to cat out its contents when needed

As for the format, taking a yum-only image as an example, would we need
anything more than the output of "rpm -qa | sort"?
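
As a rough sketch of the second option (this assumes docker-py >= 2.0 and
rpm-based images; the image names are only examples), comparing two images
could be as simple as:

# Rough sketch: get the package manifest of an image by running
# "rpm -qa | sort" in a throwaway container, then compare two tags.
import docker

client = docker.from_env()

def package_manifest(image):
    out = client.containers.run(image, ['sh', '-c', 'rpm -qa | sort'],
                                remove=True)
    return out.decode().splitlines()

if (package_manifest('kolla/centos-binary-nova-api:4.0.0') !=
        package_manifest('kolla/centos-binary-nova-api:ocata')):
    print('package manifests differ; worth a new tag-revision')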


> 2. We create new CLI tool kolla-registry for easier management of
> pushing and versioning
> 3. kolla-registry will be able to query existing source docker
> registry (incl. dockerhub) for latest tag-revision and its version
> manifesto, also dest registry for tags-revisions and manifesto
> 4. if source image manifesto != dest image manifesto -> push source
> image to dest registry and increase tag-revision by 1
> 5. kolla-registry will output latest list of images:tags-revisions
> available for kolla-k8s/ansible to consume
> 6. we keep :4.0.0 style images for every tag in kolla repository.
> These are static and will not be revised.
>
>
Yes, this is fine, but please keep in mind that this change[1] could be
merged without changing these published 4.0.0 style image tags, with the
added advantage that locally built images from a git checkout of kolla have a
less ambiguous default tag.

[1] https://review.openstack.org/#/c/448380/

> Different scenarios can be handled this way:
> 1. Autopushing to dockerhub will query the freshest built registry
> (tarballs, source) and dockerhub (dest); it will create
> image:branchname (nova-api:ocata) for the HEAD of the stable branch every run
> and image:branchname-revision with a revision increase
> 2. Users will have an easy time managing their local registry - dockerhub
> (source) and local (dest); if nova-api:ocata on dockerhub is newer
> than local, pull it to local and increase the local tip and revision
>
> Thoughts?
>

Generally positive :)


>
> [1] http://eavesdrop.openstack.org/irclogs/%23openstack-
> kolla/%23openstack-kolla.2017-04-19.log.html#t2017-04-19T19:10:25
>
> On 19 April 2017 at 10:45, Fox, Kevin M  wrote:
> > That works for detecting changes in the build system.
> >
> > It does not solve the issue of how to keep containers atomic on end user
> systems.
> >
> > All images in a k8s deployment should be the same image. This is done by
> specifying the same tag. When a new update is done, the updated deployment
> should specify a new tag to distinguish it from the old tag so that roll
> forwards/roll backs work atomically and as designed. Otherwise, roll back
> can actually break or roll forward won't actually grab newer images.
> >
> > Thanks,
> > Kevin
> >
> > 
> > From: Michał Jastrzębski [inc...@gmail.com]
> > Sent: Wednesday, April 19, 2017 8:32 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub
> >
> > I think LABEL is great idea for all the "informative" stuff. In fact
> > if we could somehow abuse LABEL to fill it up after we get packages
> > installed, we could use it for version manifesto. That will make logic
> > around "if version changed" much easier since we'll have easy access
> > to this information on both image and container.
> >
> > Our autopushing mechanism will work with tags and HEAD of stable
> > branch in this case.
> >
> > Kevin, then your use case would be done like that:
> > 1. pull container nova-compute:ocata, tag it locally to
> > nova-compute:ocata-deployed, deploy it
> > 2. every now and then pull fresh nova-compute:ocata from dockerhub
> > 3. compare versions in LABELs to see whether you want to upgrade or not
> > 4. if you do, retag :ocata-deployed to :ocata-old, :ocata to
> > :ocata-deployed and run upgrade
> > 5. keep ocata-old, revision it, backup it for as long as you want
> >
> > I also think that we can ship utils to do this in kolla, so people
> > won't need to write these themselves.
> >
> > Does that work?
> >
> > Cheers,
> > Michal
> >
> > On 19 April 2017 at 05:02, Flavio Percoco  wrote:
> >> On 19/04/17 11:20 +0100, Paul Bourke wrote:
> >>>
> >>> I'm wondering if moving to using docker labels is a better way of
> solving
> >>> the various issues being raised here.
> >>>
> >>> We can maintain a tag for each of master/ocata/newton/etc, and on each
> >>> image have a LABEL with info such as 'pbr of service/pbr of kolla/link
> to CI
> >>> of build/etc'. I believe this solves all points Kevin mentioned except

Re: [openstack-dev] [mistral] New CI Job definitions

2017-04-19 Thread Renat Akhmerov
On 19 Apr 2017, 21:37 +0700, Brad P. Crochet , wrote:

> > On Tue, Apr 18, 2017 at 2:10 AM Ренат Ахмеров  
> > wrote:
> > > Thanks Brad!
> > >
> > > So kombu gate is now non-apache, right?
> > >
> >
> > No. It would be running under mod_wsgi. We can make it non-apache if you 
> > like. Would be pretty easy to do so.

No, that’s fine. Let’s leave it with mod_wsgi as it’s closer to most real 
production environments.

Thanks

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [daisycloud-core] [requirements] [magnum] [oslo] Do we really need to upgrade pbr, docker-py and oslo.utils

2017-04-19 Thread Hongbin Lu
Zun required docker-py to be 1.8 or higher because older versions of
docker-py didn't have the API we need. Sorry if it caused difficulties on
your side but I don't think it is feasible to downgrade the version for now
since it will affect a ton of other projects.

Best regards,
Hongbin

On Thu, Apr 20, 2017 at 12:15 AM, Steven Dake (stdake) 
wrote:

> Hu,
>
>
>
> Kolla does not manage the global requirements process as it is global to
> OpenStack.  The Kolla core reviewers essentially rubber stamp changes from
> the global requirements bot assuming they pass our gating.  If they don’t
> pass our gating, we work with the committer to sort out a working solution.
>
>
>
> Taking a look at the specific issues you raised:
>
>
>
> Pbr: https://github.com/openstack/requirements/blame/stable/
> ocata/global-requirements.txt#L158
>
> Here is the change: https://github.com/openstack/requirements/commit/
> 74a8e159e3eda7c702a39e38ab96327ba85ced3c
>
> (from the infrastructure team)
>
>
>
> Docker-py: https://github.com/openstack/requirements/blame/stable/
> ocata/global-requirements.txt#L338
>
> Here is the change: https://github.com/openstack/requirements/commit/
> 330139835347a26f435ab1262f16cf9e559f32a6
>
> (from the magnum team)
>
>
>
> oslo-utils: https://github.com/openstack/requirements/blame/
> 62383acc175b77fe7f723979cefaaca65a8d12fe/global-requirements.txt#L136
>
> https://github.com/openstack/requirements/commit/
> 510c4092f48a3a9ac7518decc5d3724df8088eb7
>
> (I am not sure which team this is – the oslo team perhaps?)
>
>
>
> I would recommend taking the changes up with the requirements team or the
> direct authors.
>
>
>
> Regards
>
> -steve
>
>
>
>
>
>
>
> *From: *"hu.zhiji...@zte.com.cn" 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Wednesday, April 19, 2017 at 8:45 PM
> *To: *"openstack-dev@lists.openstack.org"  openstack.org>
> *Subject: *[openstack-dev] [kolla] [daisycloud-core]Do we really need to
> upgrade pbr, docker-py and oslo.utils
>
>
>
> Hello,
>
>
>
> As global requirements changed in Ocata, Kolla upgrades pbr>=1.8 [1],
>
> docker-py>=1.8.1[2] . Besides, Kolla also starts depending on
>
> oslo.utils>=3.18.0 to use uuidutils.generate_uuid() instead of
> uuid.uuid4() to
>
> generate UUID.
>
>
>
> IMHO, upgrading [1] and [2] is actually not what Kolla really needs to do,
>
> and uuidutils.generate_uuid() is also supported by oslo.utils 3.16. I mean,
>
> if we keep Kolla's requirements in Ocata the same as they were in Newton, an
> upper-layer
>
> user of Kolla like the daisycloud-core project can still keep other things
> unchanged
>
> when upgrading Kolla from stable/newton to stable/ocata. Otherwise, we have to
>
> upgrade from centos-release-openstack-newton to
>
> centos-release-openstack-ocata (we do not use pip since it conflicts with
> yum
>
> on files installed by the same packages). But this kind of upgrade may be too
>
> invasive and may impact other applications.
>
>
>
> I know that there have been some discussions about global requirements updates
>
> these days. So if Kolla does not really need to do these upgrades itself, can
>
> we just keep the requirements unchanged as long as possible?
>
>
>
> My 2c.
>
>
>
> [1] https://github.com/openstack/kolla/commit/
> 2f50beb452918e37dec6edd25c53e407c6e47f53
>
> [2] https://github.com/openstack/kolla/commit/
> 85abee13ba284bb087af587b673f4e44187142da
>
> [3] https://github.com/openstack/kolla/commit/
> cee89ee8bef92914036189d02745c08894a9955b
>
>
>
>
>
>
>
>
>
>
>
> B. R.,
>
> Zhijiang
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Matthew Oliver
We have started this work. I've been working on:
https://review.openstack.org/#/c/444718/

Which will do requirement checks, as specified in the Pike PTG etherpad for
Tuesday morning:
https://etherpad.openstack.org/p/relmgt-stable-requirements-ptg-pike (line
40+).

Once done, Tony and I were going to start testing it on the experimental
pipeline for Swift and Nova.

Regards,
Matt

On Thu, Apr 20, 2017 at 2:34 AM, Doug Hellmann 
wrote:

> Excerpts from Clark Boylan's message of 2017-04-19 08:10:43 -0700:
> > On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:
> > > Hoy,
> > >
> > > So Gnocchi gate is all broken (again) because it depends on "pbr"
> and
> > > some new release of oslo.* depends on pbr!=2.1.0.
> > >
> > > Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
> > > that got it banished by the requirements Gods. It does not prevent it from being
> > > used e.g. to install the software or get version information. But it
> > > does break anything that is not in OpenStack because well, pip installs
> > > the latest pbr (2.1.0) and then oslo.* is unhappy about it.
> >
> > It actually breaks everything, including OpenStack. Shade and others are
> > affected by this as well. The specific problem here is that PBR is a
> > setup_requires which means it gets installed by easy_install before
> > anything else. This means that the requirements restrictions are not
> > applied to it (neither are the constraints). So you get latest PBR from
> > easy_install then later when something checks the requirements
> > (pkg_resources console script entrypoints?) they break because latest
> > PBR isn't allowed.
> >
> > We need to stop pinning PBR and more generally stop pinning any
> > setup_requires (there are a few more now since setuptools itself is
> > starting to use that to list its deps rather than bundling them).
> >
> > > So I understand the culprit is probably pip's installation scheme, and we
> > > can blame it until we fix it. I'm also trying to push pbr 2.2.0 to
> > > avoid the entire issue.
> >
> > Yes, a new release of PBR undoing the "pin" is the current sane step
> > forward for fixing this particular issue. Monty also suggested that we
> > gate global-requirements changes on requiring that changes not pin any
> > setup_requires.
> >
> > > But for the future, could we stop updating the requirements in oslo
> libs
> > > for no good reason? just because some random OpenStack project hit a
> bug
> > > somewhere?
> > >
> > > For example, I've removed requirements update on tooz¹ for more than a
> > > year now, which did not break *anything* in the meantime, proving that
> > > this process is giving more problems than solutions. Oslo libs doing
> that
> > > automatic update introduce more pain for all consumers than anything
> (at
> > > least not in OpenStack).
> >
> > You are likely largely shielded by the constraints list here which is
> > derivative of the global requirements list. Basically by using
> > constraints you get distilled global requirements and even without being
> > part of the requirements updates you'd be shielded from breakages when
> > installed via something like devstack or other deployment method using
> > constraints.
> >
> > > So if we care about Oslo users outside OpenStack, I beg us to stop this
> > > craziness. If we don't, we'll just spend time getting rid of Oslo over
> > > the long term…
> >
> > I think we know from experience that just stopping (eg reverting to the
> > situation we had before requirements and constraints) would lead to
> > sadness. Installations would frequently be impossible due to some
> > unresolvable error in dependency resolution. Do you have some
> > alternative in mind? Perhaps we loosen the in project requirements and
> > explicitly state that constraints are known to work due to testing and
> > users should use constraints? That would give users control to manage
> > their own constraints list too if they wish. Maybe we do this in
> > libraries while continuing to be more specific in applications?
>
> At the meeting in Austin, the requirements team accepted my proposal
> to stop syncing requirements updates into projects, as described
> in https://etherpad.openstack.org/p/ocata-requirements-notes
>
> We haven't been able to find anyone to work on the implementation,
> though. It is my understanding that Tony did contact the Telemetry
> and Swift teams, who are most interested in this area of change,
> about devoting some resources to the tasks outlined in the proposal.
>
> Doug
>
> >
> > >
> > > My 2c,
> > >
> > > Cheers,
> > >
> > > ¹ Unless some API changed in a dep and we needed to raise the dep,
> > > obviously.
> > >
> > > --
> > > Julien Danjou
> > > # Free Software hacker
> > > # https://julien.danjou.info
> >
> > I don't have all the answers, but am fairly certain the situation we
> > have today is better than the one from several years ago. It is just not
> > perfect. I think we are better served by refining 

Re: [openstack-dev] [kolla] [daisycloud-core] [requirements] [magnum] [oslo] Do we really need to upgrade pbr, docker-py and oslo.utils 

2017-04-19 Thread Steven Dake (stdake)
Hu,

Kolla does not manage the global requirements process as it is global to 
OpenStack.  The Kolla core reviewers essentially rubber stamp changes from the 
global requirements bot assuming they pass our gating.  If they don’t pass our 
gating, we work with the committer to sort out a working solution.

Taking a look at the specific issues you raised:

Pbr: 
https://github.com/openstack/requirements/blame/stable/ocata/global-requirements.txt#L158
Here is the change: 
https://github.com/openstack/requirements/commit/74a8e159e3eda7c702a39e38ab96327ba85ced3c
(from the infrastructure team)

Docker-py: 
https://github.com/openstack/requirements/blame/stable/ocata/global-requirements.txt#L338
Here is the change: 
https://github.com/openstack/requirements/commit/330139835347a26f435ab1262f16cf9e559f32a6
(from the magnum team)

oslo-utils: 
https://github.com/openstack/requirements/blame/62383acc175b77fe7f723979cefaaca65a8d12fe/global-requirements.txt#L136
https://github.com/openstack/requirements/commit/510c4092f48a3a9ac7518decc5d3724df8088eb7
(I am not sure which team this is – the oslo team perhaps?)

I would recommend taking the changes up with the requirements team or the 
direct authors.

Regards
-steve



From: "hu.zhiji...@zte.com.cn" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, April 19, 2017 at 8:45 PM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [kolla] [daisycloud-core]Do we really need to upgrade 
pbr, docker-py and oslo.utils


Hello,



As global requirements changed in Ocata, Kolla upgrades pbr>=1.8 [1],

docker-py>=1.8.1[2] . Besides, Kolla also starts depending on

oslo.utils>=3.18.0 to use uuidutils.generate_uuid() instead of uuid.uuid4() to

generate UUID.



IMHO, upgrading [1] and [2] is actually not what Kolla really needs to do,

and uuidutils.generate_uuid() is also supported by oslo.utils 3.16. I mean,

if we keep Kolla's requirements in Ocata the same as they were in Newton, an upper-layer

user of Kolla like the daisycloud-core project can still keep other things unchanged

when upgrading Kolla from stable/newton to stable/ocata. Otherwise, we have to

upgrade from centos-release-openstack-newton to

centos-release-openstack-ocata (we do not use pip since it conflicts with yum

on files installed by the same packages). But this kind of upgrade may be too

invasive and may impact other applications.



I know that there have been some discussions about global requirements updates

these days. So if Kolla does not really need to do these upgrades itself, can

we just keep the requirements unchanged as long as possible?



My 2c.



[1] 
https://github.com/openstack/kolla/commit/2f50beb452918e37dec6edd25c53e407c6e47f53

[2] 
https://github.com/openstack/kolla/commit/85abee13ba284bb087af587b673f4e44187142da

[3] 
https://github.com/openstack/kolla/commit/cee89ee8bef92914036189d02745c08894a9955b











B. R.,

Zhijiang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] [daisycloud-core]Do we really need to upgrade pbr, docker-py and oslo.utils 

2017-04-19 Thread hu.zhijiang
Hello,




As global requirements changed in Ocata, Kolla upgrades pbr>=1.8 [1],

docker-py>=1.8.1[2] . Besides, Kolla also starts depending on 

oslo.utils>=3.18.0 to use uuidutils.generate_uuid() instead of uuid.uuid4() to

generate UUID.
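
(For reference, the swap in [3] amounts to something like the following;
the only behavioural difference is the return type.)

# uuid.uuid4() returns a uuid.UUID object, while oslo.utils'
# generate_uuid() returns a plain string.
import uuid
from oslo_utils import uuidutils

old_style = str(uuid.uuid4())
new_style = uuidutils.generate_uuid()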




IMHO, upgrading [1] and [2] is actually not what Kolla really needs to do,

and uuidutils.generate_uuid() is also supported by oslo.utils 3.16. I mean,

if we keep Kolla's requirements in Ocata the same as they were in Newton, an upper-layer

user of Kolla like the daisycloud-core project can still keep other things
unchanged

when upgrading Kolla from stable/newton to stable/ocata. Otherwise, we have to

upgrade from centos-release-openstack-newton to

centos-release-openstack-ocata (we do not use pip since it conflicts with yum

on files installed by the same packages). But this kind of upgrade may be too

invasive and may impact other applications.




I know that there have been some discussions about global requirements updates

these days. So if Kolla does not really need to do these upgrades itself, can

we just keep the requirements unchanged as long as possible?




My 2c.




[1] 
https://github.com/openstack/kolla/commit/2f50beb452918e37dec6edd25c53e407c6e47f53

[2] 
https://github.com/openstack/kolla/commit/85abee13ba284bb087af587b673f4e44187142da

[3] 
https://github.com/openstack/kolla/commit/cee89ee8bef92914036189d02745c08894a9955b
 

















B. R.,

Zhijiang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Development workflow for bunch of patches

2017-04-19 Thread Clark Boylan
On Wed, Apr 19, 2017, at 01:11 AM, Sławek Kapłoński wrote:
> Hello,
> 
> I have a question about how to deal with a bunch of patches which depend
> on one another.
> I made a patch to neutron (https://review.openstack.org/#/c/449831/
> ) which is not merged yet, but I
> wanted to start another patch which depends on this one
> (https://review.openstack.org/#/c/457816/
> ).
> Currently I was trying to do something like:
> 1. git review -d 
> 2. git checkout -b new_branch_for_second_patch
> 3. Make second patch, commit all changes
> 4. git review <— this will ask me if I really want to push two patches to
> gerrit so I answered „yes”
> 
> Everything is easy for me as long as I’m not making more changes in the first
> patch. How should I work with it if I, let’s say, want to change something
> in the first patch and later want to make another change to the second patch?
> IIRC when I tried to do something like that and ran „git review” to
> push changes in the second patch, the first one was also updated (and I lost
> the changes made to it in another branch).
> How should I work with something like that? Is there any guide about that
> (I couldn’t find one)?

The way I work is to always edit the tip of the series then "squash
back" edits as necessary.
So let's say we already have A <- B <- C and now I want to edit A and
push everything back so that it is up to date.

To do this I make a new commit such that A <- B <- C <- D, then `git
rebase -i HEAD~4` and edit the rebase so that I have:

  pick A
  squash D
  pick B
  pick C

Then after rebase I end up with A' <- B' <- C' and when I git review all
three are updated properly in gerrit. The basic idea here is that you
are working on a series, not a single commit, so any time you make changes
you curate the entire series.

Jim Blair even wrote a tool called git-restack to make this sort of
workflow easy. You can pip install it.

Hope this helps,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Launchpad bugs cleaning day on Tuesday 4/25

2017-04-19 Thread Maciej Szankin
We are going to have a Launchpad bugs cleaning day for Nova on Tuesday 4/25.

This day will be purely about cleaning the garbage out of Launchpad -
it is not the bug squashing day where people propose patches.

If you are not familiar with bug triage, please refer to [1].

As indicated by the dashboard [2] created by Markus, we currently have:

* 75 bugs that are inconsistent
* 44 bugs that are incomplete
* 44 bugs that are stale incomplete
* 170 bugs that are stale in progress

We also currently have 787 open bug reports (291 confirmed, 63 new), which
is a lot.
Keeping the bug queue ordered is just as important as doing reviews and
contributes
to overall development of Nova.

All help is more than welcome.

[1] https://wiki.openstack.org/wiki/Nova/BugTriage
[2] http://45.55.105.55:8082/bugs-dashboard.html

Cheers,
Maciej
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer]Looking for endpoint where event passed after first time

2017-04-19 Thread gordon chung


On 19/04/17 03:05 PM, Hui Xiang wrote:
> Hi folks,
>
>   I am posting some self-defined events to the amqp ceilometer exchange
> notification topic. During the debug session, I find that the first
> time, the event arrives at the notification bus as an AMQPIncomingMessage
> and is then handled by EventsNotificationEndpoint(); however, the second
> time, neither the AMQPIncomingMessage nor the EventsNotificationEndpoint
> can see it, yet it can still be listed from ceilometer event-list. So I wonder
> how the event is processed for the same event type?
>

not sure what version you are using, but the basic workflow since mitaka (i
think) is the notification agent listens to 'notifications.*' topics on
specific exchanges (including ceilometer). for events, it attempts to
match the incoming message against known events[1]. it will build an event
based on the definition. if there is no definition, it will create a sparse
event with a few attributes. from there it is processed according to the
pipeline[2]. you'll notice by default it pushes to gnocchi so it won't
get stored in ceilometer/panko storage. you can edit it accordingly.

[1] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml
[2] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_pipeline.yaml
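
for anyone tracing where that dispatch happens: it is driven by
oslo.messaging rather than anything ceilometer-specific. a minimal,
standalone listener along the same lines (illustrative only -- not
ceilometer's actual code; it assumes a transport_url is configured) looks
roughly like:

# Minimal oslo.messaging notification listener sketch (illustrative only).
import oslo_messaging
from oslo_config import cfg

transport = oslo_messaging.get_notification_transport(cfg.CONF)
targets = [oslo_messaging.Target(topic='notifications', exchange='ceilometer')]

class DebugEndpoint(object):
    # Called for notifications emitted at the 'info' priority.
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print('got %s from %s' % (event_type, publisher_id))

listener = oslo_messaging.get_notification_listener(
    transport, targets, [DebugEndpoint()], executor='threading')
listener.start()
listener.wait()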

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Gerrit maintenance Friday, April 21, 20:00-21:00 UTC

2017-04-19 Thread Jeremy Stanley
The Infra team will be taking the Gerrit service on
review.openstack.org offline briefly between 20:00 and 21:00 UTC
this Friday, April 21 to perform some pending renames of Git
repositories. We typically also take down the Zuul scheduler for our
CI system at the same time to avoid unfortunate mishaps (and
reenqueue testing for any active changes once we're done).

The actual downtime shouldn't span more than a few minutes since
most of the work can now happen with our systems up and running, but
replication to git.openstack.org and github.com will lag while
Gerrit is reindexing so any activities sensitive to that (such as
approving new release tags) should be performed either prior to the
start of the maintenance window or not until after midnight UTC just
to err on the side of caution.

As always, feel free to reply to this announcement, reach out to
us on the openstack-in...@lists.openstack.org mailing list or in the
#openstack-infra IRC channel on Freenode if you have any questions.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] Conditionally passing properties in Heat

2017-04-19 Thread Dan Sneddon
On 04/13/2017 12:01 AM, Rabi Mishra wrote:
> On Thu, Apr 13, 2017 at 2:14 AM, Dan Sneddon  > wrote:
> 
> On 04/12/2017 01:22 PM, Thomas Herve wrote:
> > On Wed, Apr 12, 2017 at 9:00 PM, Dan Sneddon  > wrote:
> >> I'm implementing predictable control plane IPs for spine/leaf,
> and I'm
> >> running into a problem implementing this in the TripleO Heat
> templates.
> >>
> >> I have a review in progress [1] that works, but fails on upgrade,
> so I'm
> >> looking for an alternative approach. I'm trying to influence the IP
> >> address that is selected for overcloud nodes' Control Plane IP.
> Here is
> >> the current construct:
> >>
> >>   Controller:
> >> type: OS::TripleO::Server
> >> metadata:
> >>   os-collect-config:
> >> command: {get_param: ConfigCommand}
> >> properties:
> >>   image: {get_param: controllerImage}
> >>   image_update_policy: {get_param: ImageUpdatePolicy}
> >>   flavor: {get_param: OvercloudControlFlavor}
> >>   key_name: {get_param: KeyName}
> >>   networks:
> >> - network: ctlplane  # <- Here's where the port is created
> >>
> >> If I add fixed_ip: to the networks element at the end of the above, I
> >> can select an IP address from the 'ctlplane' network, like this:
> >>
> >>   networks:
> >> - network: ctlplane
> >>   fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
> >>
> >> But the problem is that if I pass a blank string to fixed_ip, I
> get an
> >> error on deployment. This means that the old behavior of
> automatically
> >> selecting an IP doesn't work.
> >>
> >> I thought I has solved this by passing an external Neutron port,
> like this:
> >>
> >>   networks:
> >> - network: ctlplane
> >>   port: {get_attr: [ControlPlanePort, port_id]}
> >>
> >> Which works for deployments, but that fails on upgrades, since the
> >> original port was created as part of the Nova::Server resource,
> instead
> >> of being an external resource.
> >
> > Can you detail how it fails? I was under the impression we never
> > replaced servers no matter what (or we try to do that, at least). Is
> > the issue that your new port is not the correct one?
> >
> >> I'm now looking for a way to use Heat conditionals to apply the
> fixed_ip
> >> only if the value is not unset. Looking at the intrinsic
> functions [2],
> >> I don't see a way to do this. Is what I'm trying to do with Heat
> possible?
> >
> > You should be able to write something like that (not tested):
> >
> > networks:
> >   if:
> > - 
> > - network: ctlplane
> >   fixed_ip: {get_attr: [ControlPlanePort, ip_address]}
> > - network: ctlplane
> >
> > The question is how to define your condition. Maybe:
> >
> > conditions:
> >   fixed_ip_condition:
> >  not:
> > equals:
> >   - {get_attr: [ControlPlanePort, ip_address]}
> >   - ''
> >
> > To get back to the problem you stated first.
> >
> >
> >> Another option I'm exploring is conditionally applying resources. It
> >> appears that would require duplicating the entire TripleO::Server
> stanza
> >> in *-role.yaml so that there is one that uses fixed_ip and one
> that does
> >> not. Which one is applied would be based on a condition that tested
> >> whether fixed_ip was blank or not. The downside of that is that
> it would
> >> make the role definition confusing because there would be a large
> >> resource that was implemented twice, with only one line difference
> >> between them.
> >
> > You can define properties with conditions, so you shouldn't need to
> > rewrite everything.
> >
> 
> Thomas,
> 
> Thanks, I will try your suggestions and that should get me closer.
> 
> The full error log is available here:
> 
> http://logs.openstack.org/78/413278/11/check-tripleo/gate-tripleo-ci-centos-7-ovb-updates/8d91762/console.html
> 
> 
> 
> We do an interface_detach/attach when a port is replaced.
> It seems to be failing[1] as this is not implemented for
> ironic/baremetal driver.  I could see a patch[2] to add that
> functionality though.
> 
> [1]
> http://logs.openstack.org/78/413278/11/check-tripleo/gate-tripleo-ci-centos-7-ovb-updates/8d91762/logs/undercloud/var/log/nova/nova-compute.txt.gz#_2017-04-12_00_26_15_475
> 
> [2] https://review.openstack.org/#/c/419975/
> 
> We retry a few times to check whether the detach/attach is complete(it's
> an 

[openstack-dev] [nova] Reflections on the pike-1 milestone

2017-04-19 Thread Matt Riedemann

Hey everyone,

Now that the pike-1 milestone is behind us I wanted to have a recap of 
the milestone to compare what progress we made against goals we set at 
the PTG, and to look forward to the pike-2 milestone.


First some highlights of things accomplished in the pike-1 milestone in 
no particular order:


- Jay Pipes got the Ironic virt driver reporting custom resource classes 
into the Placement service for compute node inventory.
- There is good progress on the os-traits library and Alex Xu got the 
/traits API merged into the placement endpoint.
- Sean Dague got high-level agreement on unifying limits in Keystone 
which is a foundation for supporting hierarchical quotas.
- We merged the spec and plan for integrating Searchlight into nova-api. 
At this point that's all just spec, but it was a pretty complicated spec 
to work through and we have a plan going into pike-2.
- Sean Dague got uwsgi working in devstack now and Chris Dent is working 
on making nova-api run under uwsgi per the Pike community goal.
- Dan Smith has made good progress on enabling multi-cell support in the 
REST API and getting devstack to run and pass tests with a fleet of 
conductors. We'll be discussing this at the Forum [1].
- We merged Ildiko Vancsa's patch to remove the check_attach code from 
Nova, and we merged John Garbutt's spec for integrating the new Cinder 
attachment APIs into Nova. Progress has been made on the code for using 
the new APIs too.
- Chris Dent has been sending weekly emails giving updates on the work 
going on with placement, and Balazs Gibizer has been doing similar for 
the versioned notifications work. This has been helpful for keeping 
focus, recording decisions, and giving those outside the day-to-day 
involvement an idea of the progress made and where they can help.
- Good progress from the OSIC team on documenting the various policy 
rules [2].
- We have 62 blueprints/specs approved, 3 completed, and several with 
code up for review.


Some targets we missed in pike-1:

- We aren't as far along as we'd like to be with the counting quotas 
work, but to be fair, some of that was redone after initial review to 
make it easier to integrate. And we did approve the spec for putting a 
/usages API into placement which the quotas work will leverage.
- We don't have the additional-notification-fields-for-searchlight 
blueprint done yet. We hit some snags during review but those have been 
ironed out now, so we should be able to finish this early in pike-2.
- We never had a spec for using Cinder as an ephemeral backend. However, 
we will be discussing this at the Forum [3] so hopefully we'll have a 
plan going into Queens.
- The versioned notifications transformation has been slowing down, 
probably due to a lack of reviews.
- I never delivered a spec for deprecating personality files from the 
compute REST API (but I'm deprecating some other things from the API, so 
that counts, right?).
- We didn't merge a spec to support the concept of service-locked 
instances. There is a draft work in progress spec though to pick up in 
Queens [4].
- Little to no progress on merging the network-aware scheduling series 
which has been carried over since Newton. This is needed to support 
Neutron routed networks.
- The PowerVM driver series has not landed a single change yet due to 
lack of reviews.


Looking to pike-2 we have a few priority things to get done:

- Get a dsvm job running with nova + searchlight and start writing the 
proof of concept for searchlight integration with nova-api. The goal 
here is going to be finding out what issues we didn't anticipate in the 
spec, even though there were plenty of issues already identified in the 
spec. We will also be discussing this at the forum [5].

- Complete the additional-notification-fields-for-searchlight blueprint.
- We need to make progress on landing the counting quotas changes early 
so we can shake out any bugs introduced by that complicated change.
- Close on the plan for moving claims to the scheduler, discuss it with 
operators at the Forum [6], and make good progress on implementation by 
the end of the milestone.

- Get more of the versioned notifications work done.
- Now that the /traits API is available, we need to make progress on 
adding support for modeling shared storage pools in Placement.
- Have a multi-cell CI job running which tests the conductor fleet 
deployment model and API, including move (migrate) operations within a cell.
- Continue adding support for the new Cinder attachment APIs. We should 
have the code in place to create new-style attachments by the end of 
pike-2, and testing it with the grenade upgrade CI job. This is needed 
for supporting volume multi-attach.
- Get some of the PowerVM driver patches landed, at least through 
spawn/destroy, but ideally to the point of supporting a console.


Current focus:

- We have the summit coming up in less than three weeks. People are 
working on presentations and planning for the Forum 

Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Michał Jastrzębski
So after discussion started here [1] we came up with something like that:

1. Docker build will create "fingerprint" - manifesto of versions
saved somewhere (LABEL?)
2. We create new CLI tool kolla-registry for easier management of
pushing and versioning
3. kolla-registry will be able to query existing source docker
registry (incl. dockerhub) for latest tag-revision and its version
manifesto, also dest registry for tags-revisions and manifesto
4. if source image manifesto != dest image manifesto -> push source
image to dest registry and increase tag-revision by 1
5. kolla-registry will output latest list of images:tags-revisions
available for kolla-k8s/ansible to consume
6. we keep :4.0.0 style images for every tag in kolla repository.
These are static and will not be revised.
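
A rough sketch of what step 4 could look like (docker-py >= 2.0; the label
name, registry and image names are just examples):

# Sketch: push the source image under a new tag-revision only when its
# version manifest label differs from the latest one in the destination.
import docker

client = docker.from_env()

src = client.images.get('kolla/centos-binary-nova-api:ocata')
dst = client.images.get('registry.local:5000/nova-api:ocata-3')

if src.labels.get('kolla.version_manifest') != dst.labels.get('kolla.version_manifest'):
    new_tag = 'ocata-4'  # previous tag-revision + 1
    src.tag('registry.local:5000/nova-api', tag=new_tag)
    client.images.push('registry.local:5000/nova-api', tag=new_tag)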

Different scenarios can be handled this way:
1. Autopushing to dockerhub will query the freshest built registry
(tarballs, source) and dockerhub (dest); it will create
image:branchname (nova-api:ocata) for the HEAD of the stable branch every run
and image:branchname-revision with a revision increase
2. Users will have an easy time managing their local registry - dockerhub
(source) and local (dest); if nova-api:ocata on dockerhub is newer
than local, pull it to local and increase the local tip and revision

Thoughts?
Michal

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2017-04-19.log.html#t2017-04-19T19:10:25

On 19 April 2017 at 10:45, Fox, Kevin M  wrote:
> That works for detecting changes in the build system.
>
> It does not solve the issue of how to keep containers atomic on end user 
> systems.
>
> All images in a k8s deployment should be the same image. This is done by 
> specifying the same tag. When a new update is done, the updated deployment 
> should specify a new tag to distinguish it from the old tag so that roll 
> forwards/roll backs work atomically and as designed. Otherwise, roll back can 
> actually break or roll forward won't actually grab newer images.
>
> Thanks,
> Kevin
>
> 
> From: Michał Jastrzębski [inc...@gmail.com]
> Sent: Wednesday, April 19, 2017 8:32 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub
>
> I think LABEL is great idea for all the "informative" stuff. In fact
> if we could somehow abuse LABEL to fill it up after we get packages
> installed, we could use it for version manifesto. That will make logic
> around "if version changed" much easier since we'll have easy access
> to this information on both image and container.
>
> Our autopushing mechanism will work with tags and HEAD of stable
> branch in this case.
>
> Kevin, then your use case would be done like that:
> 1. pull container nova-compute:ocata, tag it locally to
> nova-compute:ocata-deployed, deploy it
> 2. every now and then pull fresh nova-compute:ocata from dockerhub
> 3. compare versions in LABELs to see whether you want to upgrade or not
> 4. if you do, retag :ocata-deployed to :ocata-old, :ocata to
> :ocata-deployed and run upgrade
> 5. keep ocata-old, revision it, backup it for as long as you want
>
> I also think that we can ship utils to do this in kolla, so people
> won't need to write these themselves.
>
> Does that work?
>
> Cheers,
> Michal
>
> On 19 April 2017 at 05:02, Flavio Percoco  wrote:
>> On 19/04/17 11:20 +0100, Paul Bourke wrote:
>>>
>>> I'm wondering if moving to using docker labels is a better way of solving
>>> the various issues being raised here.
>>>
>>> We can maintain a tag for each of master/ocata/newton/etc, and on each
>>> image have a LABEL with info such as 'pbr of service/pbr of kolla/link to CI
>>> of build/etc'. I believe this solves all points Kevin mentioned except
>>> rollback, which afaik, OpenStack doesn't support anyway. It also solves
>>> people's concerns with what is actually in the images, and is a standard
>>> Docker mechanism.
>>>
>>> Also as Michal mentioned, if users are concerned about keeping images,
>>> they can tag and stash them away themselves. It is overkill to maintain
>>> hundreds of (imo meaningless) tags in a registry, the majority of which
>>> people don't care about - they only want the latest of the branch they're
>>> deploying.
>>>
>>> Every detail of a running Kolla system can be easily deduced by scanning
>>> across nodes and printing the labels of running containers, functionality
>>> which can be shipped by Kolla. There are also methods for fetching labels of
>>> remote images[0][1] for users wishing to inspect what they are upgrading to.
>>>
>>> [0] https://github.com/projectatomic/skopeo
>>> [1] https://github.com/docker/distribution/issues/1252
>>
>>
>>
>> You beat me to it, Paul.
>>
>> I think using labels to communicate the version of each openstack software
>> installed in the image is the way to go here. We're looking into doing this
>> ourselves as part of the RDO pipeline and 

Re: [openstack-dev] [tc][elections]questions about one platform vision

2017-04-19 Thread Adam Lawson
I appreciate the remarks.

I think we are perhaps looking at early data and discussing two separate
things: events versus trends. While I do not doubt K8S has been deployed on
OpenStack, I'm looking at how folks are planning to use those two
platforms. Is it possible to host one in the other? Absolutely. Is that
supportable at scale or discussed as a serious possibility? Rarely. As for where I
believe things are going based on conversations and numerous roadmap
strategy sessions: literally no one I'm talking to talks about the combo as
making sense from a scale perspective, whether it be too-big-to-fail banks,
network companies or SaaS companies. Again, this is my perception based on
the folks I'm talking to. That's not where I see the market shifting. And
that's certainly not what I see enterprises doing or planning to do.

On my end I'm seeing many in the OpenStack community falling into a
different trap - believing nothing needs to change to accommodate a
significant new element in the market, or planning a vision for the project
with a misunderstanding of its place in the FOSS marketplace. The two
platforms do in fact compete as I see things today - and with
increasing interest in orchestrating VMs with K8S, that competition will
likely become more distinct and OpenStack will face a very new
potentiality: OpenStack being considered versus something else. OpenStack
has been IT, and the idea of a viable alternative hasn't emerged for at least
5 years; I see K8S as a real potential challenger.

But again, everything may change next week and we'll all be wrong. ; )


*Adam Lawson*

Principal Architect
Office: +1-916-794-5706

On Wed, Apr 19, 2017 at 5:14 AM, Flavio Percoco  wrote:

> On 19/04/17 11:17 +0200, Thierry Carrez wrote:
>
>> Adam Lawson wrote:
>>
>>> [...]
>>> I've been an OpenStack architect for at least 5+ years now and work with
>>> many large Fortune 100 IT shops. OpenStack in the enterprise is being
>>> used to orchestrate virtual machines. Despite the additional
>>> capabilities OpenStack is trying to accommodate, that's basically it. At
>>> scale, that's what they're doing. Not many are orchestrating bare metal
>>> that I've seen or heard. And they are exploring K8s and Docker Swarm to
>>> orchestrate containers. They aren't looking at OpenStack to do that.
>>>
>>
>> I have to disagree. We have evidence that some of the largest Kubernetes
>> deployments in the world happen on top of an OpenStack infrastructure,
>> and hopefully some of those will talk about it in Boston.
>>
>> I feel like you fall in the common trap of thinking that both
>> technologies are competing, while one is designed for infrastructure
>> providers and the other for application deployers. Sure, you can be a
>> Kubernetes-only shop if you're small enough or have Google-like
>> discipline (and a lot of those shops, unsurprisingly, were present in
>> Berlin), but most companies have to offer a wider array of
>> infrastructure services for their developers. That's where OpenStack, an
>> open infrastructure stack, comes in. Giving the infrastructure provider
>> a framework to offer multiple options to application developers and
>> operators.
>>
>
>
> Yes, this, a gazillion of times. I do _NOT_ think CNCF and OpenStack are
> (or
> need to be) in competition and I'd rather explore the different ways we can
> combine these 2 communities or, more specifically, some of the
> technologies that
> are part of these communities.
>
> To do this, we need to explore ways to make OpenStack more "flexible" so
> that we
> can allow different combinations of OpenStack, we need to allow people to
> use it
> more like a framework.
>
> I definitely don't mean it's the only thing and I'm really against calling
> almost anything "the one thing" (unless we're talking about pasta or
> pizza) and
> I believe falling into that trap would damage the community (we barely
> made it
> out in our early years/days).
>
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Development workflow for bunch of patches

2017-04-19 Thread Ihar Hrachyshka
Sometimes it's possible to avoid the stacking, and instead just rely
on Depends-On (usually in those cases where patches touch completely
different files). In that way, you won't need to restack on each
dependency respin. (But you may want to recheck to get fresh results.)
Of course it won't work as you may expect in local git checkout
because it doesn't know about Depends-On tags, so it's of limited
application.

Ihar

On Wed, Apr 19, 2017 at 1:11 AM, Sławek Kapłoński  wrote:
> Hello,
>
> I have a question about how to deal with a bunch of patches which depend one
> on another.
> I made a patch to neutron (https://review.openstack.org/#/c/449831/) which is
> not merged yet, but I wanted to start another patch which depends on
> this one (https://review.openstack.org/#/c/457816/).
> Currently I was trying to do something like:
> 1. git review -d 
> 2. git checkout -b new_branch_for_second_patch
> 3. Make second patch, commit all changes
> 4. git review <— this will ask me if I really want to push two patches to
> gerrit so I answered „yes”
>
> Everything is easy for me as long as I’m not making more changes in the first
> patch. How should I work with it if I, let’s say, want to change something in
> the first patch and later want to make another change to the second patch? IIRC
> when I tried to do something like that and ran „git review” to push
> changes in the second patch, the first one was also updated (and I lost the changes
> made to it in another branch).
> How should I work with something like that? Is there any guide about that (I
> couldn’t find one)?
>
> —
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Taskflow] Current state or the project ?

2017-04-19 Thread Eric Fried
Robin-

> Others (with a slightly less bias than I might have, haha) though I 
> think should chime in on there experiences :)

I can tell you we've been using TaskFlow in some fairly nontrivial ways
in the PowerVM compute driver [1][2][3] and pypowervm [4], the library
that supports it.  We've found it to be a boon, especially for automated
cleanup (via revert() chains) when something goes wrong.  Doing this
kind of workflow management is inherently complicated, but we find
TaskFlow makes it about as straightforward as we could reasonably expect
it to be.
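
As a tiny illustration of that revert() pattern (a standalone sketch, not
code from the PowerVM driver; the task names are made up):

# Standalone TaskFlow sketch: if Deploy.execute() raises, the engine calls
# revert() on the tasks that already ran, so Allocate is cleaned up.
from taskflow import engines, task
from taskflow.patterns import linear_flow

class Allocate(task.Task):
    def execute(self):
        print('allocating resources')

    def revert(self, *args, **kwargs):
        print('cleaning up allocation')

class Deploy(task.Task):
    def execute(self):
        raise RuntimeError('boom')

flow = linear_flow.Flow('deploy-flow')
flow.add(Allocate(), Deploy())

try:
    engines.run(flow)
except RuntimeError:
    # The failure is re-raised after the completed tasks were reverted.
    pass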

Good luck.

Eric Fried (efried)

[1]
https://github.com/openstack/nova-powervm/tree/stable/ocata/nova_powervm/virt/powervm/tasks
[2]
https://github.com/openstack/nova-powervm/blob/stable/ocata/nova_powervm/virt/powervm/driver.py#L380
[3]
https://github.com/openstack/nova-powervm/blob/stable/ocata/nova_powervm/virt/powervm/driver.py#L567
[4]
https://github.com/powervm/pypowervm/blob/release/1.1.2/pypowervm/utils/transaction.py#L498

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Emails for OpenStack R Release Name voting going out - please be patient

2017-04-19 Thread Jay S Bryant

All,

For those of you who haven't received an e-mail, check the inbox you use for 
Gerrit.  You can verify what that is by going to review.openstack.org , 
click your name, go to settings, the e-mail address is set there.


The naming vote and the TC vote e-mails got lost in that inbox for me.

Hope this helps.

Jay



On 4/12/2017 7:09 AM, Dulko, Michal wrote:

On Wed, 2017-04-12 at 06:57 -0500, Monty Taylor wrote:

On 04/06/2017 07:34 AM, Monty Taylor wrote:

Hey all!

I've started the R Release Name poll and currently am submitting
everyone's email address to the system. In order to not make our fine
friends at Carnegie Mellon (the folks who run the CIVS voting service)
upset, I have a script that submits the emails one at a time with a
half-second delay between each email. That means at best, since there
are 40k people to process it'll take ~6 hours for them all to go out.

Which is to say - emails are on their way - but if you haven't gotten
yours yet, that's fine. I'll send another email when they've all gone
out, so don't worry about not receiving one until I've sent that mail.

Well- that took longer than I expected. Because of some rate limiting,
1/2 second delay was not long enough...

Anyway - all of the emails should have gone out now. Because that took
so long, I'm going to hold the poll open until next Wednesday.

Monty

Not sure why, but I haven't received an email yet.

Thanks,
Michal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][all] Core mentorship program kickoff

2017-04-19 Thread Emilien Macchi
On Wed, Apr 19, 2017 at 1:22 PM, Michał Jastrzębski  wrote:
> Hello everyone,
>
> On todays meeting we officially started mentorship program in Kolla:)
> If you are core or you are interested in becoming one, please sign up
> on this etherpad
>
> https://etherpad.openstack.org/p/kolla-mentorship-signup
>
> Idea is to provide safe environment to ask questions, get feedback
> from trusted person in core team and ultimately join core team.
>
> Role of mentor is:
> 1. Make sure to review changes that your student reviewed, providing
> feedback to his/hers review as well
> 2. Review changes your student proposed
> 3. Answer questions about review process, technical issues and stuff like that
> 4. Be a trusted friend in community:)
> 5. Ultimately, when you decide that your student is ready, feel free
> to kick off voting process for core addition or let me know, I'll do
> it for you
>
> Role of student:
> 1. Review, review, review, your voice counts
> 2. Don't be shy to ask your mentor, either openly or privately
> 3. Care for project en large, care for code and community, it's your
> project and someday you might be mentoring another person:)
>
> I encourage everyone to take part in this program! This is just a
> pilot, we're figuring it out as we go so help us evolve this effort
> and maybe make it more cross-community:)

I just find it very cool, and a big +1 on making it a cross-community effort if it works.

> Regards,
> Michal
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer]Looking for endpoint where event passed after first time

2017-04-19 Thread Hui Xiang
Hi folks,

  I am posting some self-defined events to the ceilometer exchange
notification topic on AMQP. During a debug session, I find that the first
time, the event arrives on the notification bus as an AMQPIncomingMessage
and is then handled by EventsNotificationEndpoint(); however, the second
time, neither the AMQPIncomingMessage nor the EventsNotificationEndpoint can
see it, yet it still shows up in ceilometer event-list. So I wonder how the
processing differs when the same event type is sent again?

Thanks in advance!

BR.
Hui.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Taskflow] Current state or the project ?

2017-04-19 Thread Joshua Harlow

Robin De-Lillo wrote:

Hello Guys,

I'm Robin a Software developer for a VFX company based in Canada. As the
company grow up, we are currently looking into redesigning our internal
processes and workflows in a more nodal/graph based approach.

Ideally we would like to start from an existing library so we don't
re-implement things from scratch. We found out TaskFlow which, after a
couple of tests, looks very promising to us. Good work with that !!

We were wondering what is the current state of this project ? Is that
still something under active development or a priority for OpenStack ?
As we would definitely be happy to contribute to this library in the
future, we are just gathering information around for now to ensure we
pick up the best solution which suit our needs.

Thanks a lot,
Robin De Lillo



Hi there!

So what you describe seems like a good fit for taskflow, since its
engine is really built around exactly that nodal/graph-based approach
(i.e. the engine[1] is essentially code for traversing a graph of tasks
in various orders depending on task execution, using the futures
concept/paradigm to run them and collect results).


Any way we can get more details on what you might want to be doing? That
would help us work out whether it's a good fit or not. If you can't say,
that's OK too (depends on the project/company and all that).


So about the current state.

It's still alive, though development has slowed a little (in that I haven't
been as active since I moved to godaddy, where I'm helping revamp
some of their deployment, automation... and operational aspects of
openstack itself); but it still IMHO gets fixes, and I'm more than
willing and able to help folks out in learning some stuff. So I wouldn't
say super-active, but ongoing as needed (which I think is somewhat
common for more of oslo than I would like to admit); though don't take
that negatively :)


Others (with slightly less bias than I might have, haha) should, I
think, chime in on their experiences :)


The question around 'priority for OpenStack', that's a tough one, 
because I think the priorities of OpenStack are sort of end-user / 
deployer/operator ... defined, so it's slightly hard to identify what 
they are (besides 'make OpenStack great again', lol).


What other solutions are you thinking of/looking at/considering?

Typically what I've seen are celery, RQ (redis) and probably a few 
others that I listed once @ 
https://docs.openstack.org/developer/taskflow/shelf.html#libraries-frameworks 
(all of these share similar 'aspects' as taskflow, to some degree).


That's my 3 cents ;)

-Josh

[1] https://docs.openstack.org/developer/taskflow/engines.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Taskflow] Current state or the project ?

2017-04-19 Thread Robin De-Lillo
Hello Guys,

I'm Robin, a software developer at a VFX company based in Canada. As the
company grows, we are currently looking into redesigning our internal
processes and workflows around a more nodal/graph-based approach.

Ideally we would like to start from an existing library so we don't
re-implement things from scratch. We found TaskFlow, which, after a
couple of tests, looks very promising to us. Good work on it!

We were wondering what the current state of this project is. Is it still
under active development, and is it a priority for OpenStack? We would
definitely be happy to contribute to this library in the future; for now we
are just gathering information to make sure we pick the solution that best
suits our needs.

Thanks a lot,
Robin De Lillo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Fox, Kevin M
That works for detecting changes in the build system.

It does not solve the issue of how to keep containers atomic on end user 
systems.

All images in a k8s deployment should be the same image. This is done by 
specifying the same tag. When an update is rolled out, the updated deployment 
should specify a new tag to distinguish it from the old one, so that roll 
forwards/roll backs work atomically and as designed. Otherwise, roll back can 
actually break, or roll forward won't actually grab newer images.
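
As a concrete illustration (the image name and tags below are hypothetical),
a rolling update and a rollback in k8s are driven entirely by the tag
recorded in the deployment spec:

    kubectl set image deployment/nova-compute nova-compute=kolla/nova-compute:4.0.0-2
    kubectl rollout status deployment/nova-compute
    kubectl rollout undo deployment/nova-compute    # returns to the previously recorded tag

If the tag never changes, "set image" is a no-op and there is nothing for
"rollout undo" to go back to, which is exactly the atomicity problem above.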

Thanks,
Kevin


From: Michał Jastrzębski [inc...@gmail.com]
Sent: Wednesday, April 19, 2017 8:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

I think LABEL is great idea for all the "informative" stuff. In fact
if we could somehow abuse LABEL to fill it up after we get packages
installed, we could use it for version manifesto. That will make logic
around "if version changed" much easier since we'll have easy access
to this information on both image and container.

Our autopushing mechanism will work with tags and HEAD of stable
branch in this case.

Kevin, then your use case would be done like that:
1. pull container nova-compute:ocata, tag it locally to
nova-compute:ocata-deployed, deploy it
2. every now and then pull fresh nova-compute:ocata from dockerhub
3. compare versions in LABELs to see whether you want to upgrade or not
4. if you do, retag :ocata-deployed to :ocata-old, :ocata to
:ocata-deployed and run upgrade
5. keep ocata-old, revision it, backup it for as long as you want

I also think that we can ship utils to do this in kolla, so people
won't need to write these themselves.

Does that work?

Cheers,
Michal

On 19 April 2017 at 05:02, Flavio Percoco  wrote:
> On 19/04/17 11:20 +0100, Paul Bourke wrote:
>>
>> I'm wondering if moving to using docker labels is a better way of solving
>> the various issue being raised here.
>>
>> We can maintain a tag for each of master/ocata/newton/etc, and on each
>> image have a LABEL with info such as 'pbr of service/pbr of kolla/link to CI
>> of build/etc'. I believe this solves all points Kevin mentioned except
>> rollback, which afaik, OpenStack doesn't support anyway. It also solves
>> people's concerns with what is actually in the images, and is a standard
>> Docker mechanism.
>>
>> Also as Michal mentioned, if users are concerned about keeping images,
>> they can tag and stash them away themselves. It is overkill to maintain
>> hundreds of (imo meaningless) tags in a registry, the majority of which
>> people don't care about - they only want the latest of the branch they're
>> deploying.
>>
>> Every detail of a running Kolla system can be easily deduced by scanning
>> across nodes and printing the labels of running containers, functionality
>> which can be shipped by Kolla. There are also methods for fetching labels of
>> remote images[0][1] for users wishing to inspect what they are upgrading to.
>>
>> [0] https://github.com/projectatomic/skopeo
>> [1] https://github.com/docker/distribution/issues/1252
>
>
>
> You beat me to it, Paul.
>
> I think using lables to communicate the version of each openstack software
> installed in the image is the way to go here. We're looking into doing this
> ourselves as part of the RDO pipeline and it'd be awesome to have it being
> part
> of kolla-build itself. Steve Baker, I believe, was working on this.
>
> The more explicit we are about the contents of the image, the better. People
> want to know what's in there, rather than assuming based on the tag.
>
> Flavio
>
>
>> -Paul
>>
>> On 18/04/17 22:10, Michał Jastrzębski wrote:
>>>
>>> On 18 April 2017 at 13:54, Doug Hellmann  wrote:

 Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:
>
> On 18 April 2017 at 12:41, Doug Hellmann  wrote:
>>
>> Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
>>>
>>> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann
>>> 
>>> wrote:
>>>
 Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34
 -0700:
>
> My dear Kollegues,
>
> Today we had discussion about how to properly name/tag images being
> pushed to dockerhub. That moved towards general discussion on
> revision
> mgmt.
>
> Problem we're trying to solve is this:
> If you build/push images today, your tag is 4.0
> if you do it tomorrow, it's still 4.0, and will keep being 4.0
> until
> we tag new release.
>
> But image built today is not equal to image built tomorrow, so we
> would like something like 4.0.0-1, 4.0.0-2.
> While we can reasonably detect history of revisions in dockerhub,
> local env will be extremely hard to do.

Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Fox, Kevin M
One other thing: checksums also do not convey any information about 
newer/older-ness, only that a change happened.


From: Britt Houser (bhouser) [bhou...@cisco.com]
Sent: Wednesday, April 19, 2017 3:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

I agree with Paul here.  I like the idea of solving this with labels instead of 
tags.  A label is imbedded into the docker image, and if it changes, the 
checksum of the image changes.  A tag is kept in the image manifest, and can be 
altered w/o changing the underlying image.  So to me a label is better IMHO, 
b/c it preserves this data within the image itself in a manner which is easy to 
detect if its been altered.

thx,
britt


From: Paul Bourke 
Sent: Apr 19, 2017 6:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

I'm wondering if moving to using docker labels is a better way of
solving the various issue being raised here.

We can maintain a tag for each of master/ocata/newton/etc, and on each
image have a LABEL with info such as 'pbr of service/pbr of kolla/link
to CI of build/etc'. I believe this solves all points Kevin mentioned
except rollback, which afaik, OpenStack doesn't support anyway. It also
solves people's concerns with what is actually in the images, and is a
standard Docker mechanism.

Also as Michal mentioned, if users are concerned about keeping images,
they can tag and stash them away themselves. It is overkill to maintain
hundreds of (imo meaningless) tags in a registry, the majority of which
people don't care about - they only want the latest of the branch
they're deploying.

Every detail of a running Kolla system can be easily deduced by scanning
across nodes and printing the labels of running containers,
functionality which can be shipped by Kolla. There are also methods for
fetching labels of remote images[0][1] for users wishing to inspect what
they are upgrading to.

[0] https://github.com/projectatomic/skopeo
[1] https://github.com/docker/distribution/issues/1252

-Paul

On 18/04/17 22:10, Michał Jastrzębski wrote:
> On 18 April 2017 at 13:54, Doug Hellmann  wrote:
>> Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:
>>> On 18 April 2017 at 12:41, Doug Hellmann  wrote:
 Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
> wrote:
>
>> Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
>>> My dear Kollegues,
>>>
>>> Today we had discussion about how to properly name/tag images being
>>> pushed to dockerhub. That moved towards general discussion on revision
>>> mgmt.
>>>
>>> Problem we're trying to solve is this:
>>> If you build/push images today, your tag is 4.0
>>> if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>>> we tag new release.
>>>
>>> But image built today is not equal to image built tomorrow, so we
>>> would like something like 4.0.0-1, 4.0.0-2.
>>> While we can reasonably detect history of revisions in dockerhub,
>>> local env will be extremely hard to do.
>>>
>>> I'd like to ask you for opinions on desired behavior and how we want
>>> to deal with revision management in general.
>>>
>>> Cheers,
>>> Michal
>>>
>>
>> What's in the images, kolla? Other OpenStack components?
>
>
> Yes, each image will typically contain all software required for one
> OpenStack service, including dependencies from OpenStack projects or the
> base OS. Installed via some combination of git, pip, rpm, deb.
>
>> Where does the
>> 4.0.0 come from?
>>
>>
> Its the python version string from the kolla project itself, so ultimately
> I think pbr. I'm suggesting that we switch to using the
> version.release_string[1] which will tag with the longer version we use 
> for
> other dev packages.
>
> [1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py

 Why are you tagging the artifacts containing other projects with the
 version number of kolla, instead of their own version numbers and some
 sort of incremented build number?
>>>
>>> This is what we do in Kolla and I'd say logistics and simplicity of
>>> implementation. Tags are more than just information for us. We have to
>>
>> But for a user consuming the image, they have no idea what version of
>> nova is in it because the version on the image is tied to a different
>> application entirely.
>
> That's easy enough to check tho (just docker exec into container and
> do pip freeze). On the other hand you'll have information that "this
> 

Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Fox, Kevin M
K8s can't pull containers based on labels. I do think labels may be a good way 
of storing container fingerprints, though.

Thanks,
Kevin

From: Britt Houser (bhouser) [bhou...@cisco.com]
Sent: Wednesday, April 19, 2017 3:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

I agree with Paul here.  I like the idea of solving this with labels instead of 
tags.  A label is imbedded into the docker image, and if it changes, the 
checksum of the image changes.  A tag is kept in the image manifest, and can be 
altered w/o changing the underlying image.  So to me a label is better IMHO, 
b/c it preserves this data within the image itself in a manner which is easy to 
detect if its been altered.

thx,
britt


From: Paul Bourke 
Sent: Apr 19, 2017 6:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

I'm wondering if moving to using docker labels is a better way of
solving the various issue being raised here.

We can maintain a tag for each of master/ocata/newton/etc, and on each
image have a LABEL with info such as 'pbr of service/pbr of kolla/link
to CI of build/etc'. I believe this solves all points Kevin mentioned
except rollback, which afaik, OpenStack doesn't support anyway. It also
solves people's concerns with what is actually in the images, and is a
standard Docker mechanism.

Also as Michal mentioned, if users are concerned about keeping images,
they can tag and stash them away themselves. It is overkill to maintain
hundreds of (imo meaningless) tags in a registry, the majority of which
people don't care about - they only want the latest of the branch
they're deploying.

Every detail of a running Kolla system can be easily deduced by scanning
across nodes and printing the labels of running containers,
functionality which can be shipped by Kolla. There are also methods for
fetching labels of remote images[0][1] for users wishing to inspect what
they are upgrading to.

[0] https://github.com/projectatomic/skopeo
[1] https://github.com/docker/distribution/issues/1252

-Paul

On 18/04/17 22:10, Michał Jastrzębski wrote:
> On 18 April 2017 at 13:54, Doug Hellmann  wrote:
>> Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:
>>> On 18 April 2017 at 12:41, Doug Hellmann  wrote:
 Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
> wrote:
>
>> Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
>>> My dear Kollegues,
>>>
>>> Today we had discussion about how to properly name/tag images being
>>> pushed to dockerhub. That moved towards general discussion on revision
>>> mgmt.
>>>
>>> Problem we're trying to solve is this:
>>> If you build/push images today, your tag is 4.0
>>> if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>>> we tag new release.
>>>
>>> But image built today is not equal to image built tomorrow, so we
>>> would like something like 4.0.0-1, 4.0.0-2.
>>> While we can reasonably detect history of revisions in dockerhub,
>>> local env will be extremely hard to do.
>>>
>>> I'd like to ask you for opinions on desired behavior and how we want
>>> to deal with revision management in general.
>>>
>>> Cheers,
>>> Michal
>>>
>>
>> What's in the images, kolla? Other OpenStack components?
>
>
> Yes, each image will typically contain all software required for one
> OpenStack service, including dependencies from OpenStack projects or the
> base OS. Installed via some combination of git, pip, rpm, deb.
>
>> Where does the
>> 4.0.0 come from?
>>
>>
> Its the python version string from the kolla project itself, so ultimately
> I think pbr. I'm suggesting that we switch to using the
> version.release_string[1] which will tag with the longer version we use 
> for
> other dev packages.
>
> [1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py

 Why are you tagging the artifacts containing other projects with the
 version number of kolla, instead of their own version numbers and some
 sort of incremented build number?
>>>
>>> This is what we do in Kolla and I'd say logistics and simplicity of
>>> implementation. Tags are more than just information for us. We have to
>>
>> But for a user consuming the image, they have no idea what version of
>> nova is in it because the version on the image is tied to a different
>> application entirely.
>
> That's easy enough to check tho (just docker exec into container and
> do pip freeze). On the other hand you'll have 

Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Fox, Kevin M
I'm not saying we keep around hundreds of tags, just that our tags have enough 
meaning that user systems can distinguish when something changes.

We can keep the most recent N revisions of the container around for some value 
of N. Older containers just get deleted from the hub. No reason to keep them around.

kubernetes uses tags with a deployment to do atomic rolling upgrades/rollbacks. 
There aren't many other mechanisms it supports.

When I'm talking about roll forward/back, I'm not talking about moving between 
major versions, as I do realize those are unsupported. What I'm talking about is 
rolling forward/back containers within the same single version. The user applies 
security updates, stuff breaks, and the user rolls back to the container from right 
before the security update, then works with upstream to fix the breakage before 
rolling forward again.

Thanks,
Kevin


From: Paul Bourke [paul.bou...@oracle.com]
Sent: Wednesday, April 19, 2017 3:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

I'm wondering if moving to using docker labels is a better way of
solving the various issue being raised here.

We can maintain a tag for each of master/ocata/newton/etc, and on each
image have a LABEL with info such as 'pbr of service/pbr of kolla/link
to CI of build/etc'. I believe this solves all points Kevin mentioned
except rollback, which afaik, OpenStack doesn't support anyway. It also
solves people's concerns with what is actually in the images, and is a
standard Docker mechanism.

Also as Michal mentioned, if users are concerned about keeping images,
they can tag and stash them away themselves. It is overkill to maintain
hundreds of (imo meaningless) tags in a registry, the majority of which
people don't care about - they only want the latest of the branch
they're deploying.

Every detail of a running Kolla system can be easily deduced by scanning
across nodes and printing the labels of running containers,
functionality which can be shipped by Kolla. There are also methods for
fetching labels of remote images[0][1] for users wishing to inspect what
they are upgrading to.

[0] https://github.com/projectatomic/skopeo
[1] https://github.com/docker/distribution/issues/1252

-Paul

On 18/04/17 22:10, Michał Jastrzębski wrote:
> On 18 April 2017 at 13:54, Doug Hellmann  wrote:
>> Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:
>>> On 18 April 2017 at 12:41, Doug Hellmann  wrote:
 Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
> wrote:
>
>> Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
>>> My dear Kollegues,
>>>
>>> Today we had discussion about how to properly name/tag images being
>>> pushed to dockerhub. That moved towards general discussion on revision
>>> mgmt.
>>>
>>> Problem we're trying to solve is this:
>>> If you build/push images today, your tag is 4.0
>>> if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>>> we tag new release.
>>>
>>> But image built today is not equal to image built tomorrow, so we
>>> would like something like 4.0.0-1, 4.0.0-2.
>>> While we can reasonably detect history of revisions in dockerhub,
>>> local env will be extremely hard to do.
>>>
>>> I'd like to ask you for opinions on desired behavior and how we want
>>> to deal with revision management in general.
>>>
>>> Cheers,
>>> Michal
>>>
>>
>> What's in the images, kolla? Other OpenStack components?
>
>
> Yes, each image will typically contain all software required for one
> OpenStack service, including dependencies from OpenStack projects or the
> base OS. Installed via some combination of git, pip, rpm, deb.
>
>> Where does the
>> 4.0.0 come from?
>>
>>
> Its the python version string from the kolla project itself, so ultimately
> I think pbr. I'm suggesting that we switch to using the
> version.release_string[1] which will tag with the longer version we use 
> for
> other dev packages.
>
> [1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py

 Why are you tagging the artifacts containing other projects with the
 version number of kolla, instead of their own version numbers and some
 sort of incremented build number?
>>>
>>> This is what we do in Kolla and I'd say logistics and simplicity of
>>> implementation. Tags are more than just information for us. We have to
>>
>> But for a user consuming the image, they have no idea what version of
>> nova is in it because the version on the image is tied to a different
>> application entirely.
>
> That's easy enough to check tho (just docker exec into container and

[openstack-dev] [kolla][all] Core mentorship program kickoff

2017-04-19 Thread Michał Jastrzębski
Hello everyone,

At today's meeting we officially started the mentorship program in Kolla :)
If you are a core or are interested in becoming one, please sign up
on this etherpad:

https://etherpad.openstack.org/p/kolla-mentorship-signup

The idea is to provide a safe environment to ask questions, get feedback
from a trusted person on the core team, and ultimately join the core team.

The role of the mentor is:
1. Make sure to review changes that your student reviewed, providing
feedback on their review as well
2. Review changes your student proposed
3. Answer questions about the review process, technical issues, and the like
4. Be a trusted friend in the community :)
5. Ultimately, when you decide that your student is ready, feel free
to kick off the voting process for core addition, or let me know and I'll do
it for you

The role of the student is:
1. Review, review, review; your voice counts
2. Don't be shy about asking your mentor questions, either openly or privately
3. Care for the project at large, care for the code and the community; it's your
project, and someday you might be mentoring another person :)

I encourage everyone to take part in this program! This is just a
pilot; we're figuring it out as we go, so help us evolve this effort
and maybe make it more cross-community :)

Regards,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Bridging the production/CI workflow gap with large periodic CI jobs

2017-04-19 Thread Justin Kilpatrick
More nodes is always better, but I don't think we need to push the host
cloud to its absolute limits right away. I have a list of several
pain points I expect to find with just 30-ish nodes that should keep us
busy for a while.

I think the optimizations are a good idea though, especially if we
want to pave the way for the next level of this sort of effort: devs
being able to ask for a 'scale ci' run on gerrit and schedule
a decent-sized job for whenever it's convenient. The closer we can get
devs to large environments on demand, the faster and easier these
issues can be solved.

But for now baby steps.

On Wed, Apr 19, 2017 at 12:30 PM, Ben Nemec  wrote:
> TLDR: We have the capacity to do this.  One scale job can be absorbed into
> our existing test infrastructure with minimal impact.
>
>
> On 04/19/2017 07:50 AM, Flavio Percoco wrote:
>>
>> On 18/04/17 14:28 -0400, Emilien Macchi wrote:
>>>
>>> On Mon, Apr 17, 2017 at 3:52 PM, Justin Kilpatrick
>>>  wrote:

 Because CI jobs tend to max out about 5 nodes there's a whole class of
 minor bugs that make it into releases.

 What happens is that they never show up in small clouds, then when
 they do show up in larger testing clouds the people deploying those
 simply work around the issue and get onto what they where supposed to
 be testing. These workarounds do get documented/BZ'd but since they
 don't block anyone and only show up in large environments they become
 hard for developers to fix.

 So the issue gets stuck in limbo, with nowhere to test a patchset and
 no one owning the issue.

 These issues pile up and pretty soon there is a significant difference
 between the default documented workflow and the 'scale' workflow which
 is filled with workarounds which may or may not be documented
 upstream.

 I'd like to propose getting these issues more visibility to having a
 periodic upstream job that uses 20-30 ovb instances to do a larger
 deployment. Maybe at 3am on a Sunday or some other time where there's
 idle execution capability to exploit. The goal being to make these
 sorts of issues more visible and hopefully get better at fixing them.
>>>
>>>
>>> Wait no, I know some folks at 3am on a Saturday night who use TripleO
>>> CI (ok that was a joke).
>>
>>
>> Jokes apart, it really depends on the TZ and when you schedule it. 3:00
>> UTC on a
>> Sunday is Monday 13:00 in Sydney :) Saturdays might work better but
>> remember
>> that some countries work on Sundays.
>
>
> With the exception of the brief period where the ovb jobs were running at
> full capacity 24 hours a day, there has always been a lull in activity
> during early morning UTC.  Yes, there are people working during that time,
> but generally far fewer and the load on TripleO CI is at its lowest point.
> Honestly I'd be okay running this scale job every night, not just on the
> weekend.  A week of changes is a lot to sift through if a scaling issue
> creeps into one of the many, many projects that affect such things in
> TripleO.
>
> Also, I should note that we're not currently being constrained by absolute
> hardware limits in rh1.  The reason I haven't scaled our concurrent jobs
> higher is that there is already performance degradation when we have a full
> 70 jobs running at once.  This type of scale job would require a lot of
> theoretical resources, but those 30 compute nodes are mostly going to be
> sitting there idle while the controller(s) get deployed, so in reality their
> impact on the infrastructure is going to be less than if we just added more
> concurrent jobs that used 30 additional nodes.  And we do have the
> memory/cpu/disk to spare in rh1 to spin up more vms.
>
> We could also take advantage of heterogeneous OVB environments now so that
> the compute nodes are only 3 GB VMs instead of 8 as they are now. That would
> further reduce the impact of this sort of job.  It would require some tweaks
> to how the testenvs are created, but that shouldn't be a problem.
>
>>
 To be honest I'm not sure this is the best solution, but I'm seeing
 this anti pattern across several issues and I think we should try and
 come up with a solution.

>>>
>>> Yes this proposal is really cool. There is an alternative to run this
>>> periodic scenario outside TripleO CI and send results via email maybe.
>>> But it is something we need to discuss with RDO Cloud people and see
>>> if we would have such resources to make it on a weekly frequency.
>>>
>>> Thanks for bringing this up, it's crucial for us to have this kind of
>>> feedback, now let's take actions.
>>
>>
>> +1
>>
>> Flavio
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 

[openstack-dev] [kolla] Demystifying the kolla-kubernetes gate

2017-04-19 Thread Steven Dake (stdake)
Hey folks,

I am holding a workshop on how the kolla-kubernetes gate operates.  If you are 
interested in this workshop, please sign up here:

http://doodle.com/poll/bee7umevf43nwi6y

Regards,
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All][Elections] Vote Vote Vote in the TC election!

2017-04-19 Thread Kendall Nelson
Hello All,

We are coming down to the last hours for voting in the TC election. Voting
ends at 23:45 on April 20th, 2017.

Search your gerrit preferred email address[0] for the following subject:
Poll: OpenStack Technical Committee (TC) Election - October 2016

Yes, the poll has an inaccurate title- it was misnamed at creation and
can't be modified once voting has begun. Sorry for any confusion this may
have caused!

That is your ballot and links you to the voting application. Please vote.
If you have voted, please encourage your colleagues to vote.

Candidate statements are linked to the names of all confirmed candidates:
http://governance.openstack.org/election/#pike-tc-candidates


What to do if you don't see the email and have a commit in at least one of
the official programs projects[1]:
  * check the trash of your gerrit Preferred Email address[0], in case it
went into trash or spam
  * wait a bit and check again, in case your email server is a bit slow
  * find the sha of at least one commit from the program project repos[1]
and email the election officials[2]. If we can confirm that you are
entitled to vote, we will add you to the voters list and you will be
emailed a ballot.

Please vote!

Thank you,
Kendall Nelson (diablo_rojo)

[0] Sign into review.openstack.org: Go to Settings > Contact
Information. Look at the email listed as your Preferred Email.
That is where the ballot has been sent.
[1]:
https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=jan-2017-elections

[2] http://governance.openstack.org/election/#election-officials
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] do we still need non-voting tests for older releases?

2017-04-19 Thread Doug Hellmann
I noticed again today that we have some test jobs running for some
of the Oslo libraries against old versions of services (e.g.,
gate-tempest-dsvm-neutron-src-oslo.log-ubuntu-xenial-newton,
gate-tempest-dsvm-neutron-src-oslo.log-ubuntu-xenial-ocata, and
gate-oslo.log-src-grenade-dsvm-ubuntu-xenial-nv).

I don't remember what those are for, but I imagine they have to do
with testing compatibility. They're all non-voting, though, so maybe
not?

Now that we're constraining libraries in our test systems, I wonder
if we still need the jobs at all?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Doug Hellmann
Excerpts from Clark Boylan's message of 2017-04-19 08:10:43 -0700:
> On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:
> > Hoy,
> > 
> > So Gnocchi gate is all broken (agan) because it depends on "pbr" and
> > some new release of oslo.* depends on pbr!=2.1.0.
> > 
> > Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
> > that got in banished by requirements Gods. It does not prevent it to be
> > used e.g. to install the software or get version information. But it
> > does break anything that is not in OpenStack because well, pip installs
> > the latest pbr (2.1.0) and then oslo.* is unhappy about it.
> 
> It actually breaks everything, including OpenStack. Shade and others are
> affected by this as well. The specific problem here is that PBR is a
> setup_requires which means it gets installed by easy_install before
> anything else. This means that the requirements restrictions are not
> applied to it (neither are the constraints). So you get latest PBR from
> easy_install then later when something checks the requirements
> (pkg_resources console script entrypoints?) they break because latest
> PBR isn't allowed.
> 
> We need to stop pinning PBR and more generally stop pinning any
> setup_requires (there are a few more now since setuptools itself is
> starting to use that to list its deps rather than bundling them).
> 
> > So I understand the culprit is probably pip installation scheme, and we
> > can blame him until we fix it. I'm also trying to push pbr 2.2.0 to
> > avoid the entire issue.
> 
> Yes, a new release of PBR undoing the "pin" is the current sane step
> forward for fixing this particular issue. Monty also suggested that we
> gate global requirements changes on requiring changes not pin any
> setup_requires.
> 
> > But for the future, could we stop updating the requirements in oslo libs
> > for no good reason? just because some random OpenStack project hit a bug
> > somewhere?
> > 
> > For example, I've removed requirements update on tooz¹ for more than a
> > year now, which did not break *anything* in the meantime, proving that
> > this process is giving more problem than solutions. Oslo libs doing that
> > automatic update introduce more pain for all consumers than anything (at
> > least not in OpenStack).
> 
> You are likely largely shielded by the constraints list here which is
> derivative of the global requirements list. Basically by using
> constraints you get distilled global requirements and even without being
> part of the requirements updates you'd be shielded from breakages when
> installed via something like devstack or other deployment method using
> constraints.
> 
> > So if we care about Oslo users outside OpenStack, I beg us to stop this
> > crazyness. If we don't, we'll just spend time getting rid of Oslo over
> > the long term…
> 
> I think we know from experience that just stopping (eg reverting to the
> situation we had before requirements and constraints) would lead to
> sadness. Installations would frequently be impossible due to some
> unresolvable error in dependency resolution. Do you have some
> alternative in mind? Perhaps we loosen the in project requirements and
> explicitly state that constraints are known to work due to testing and
> users should use constraints? That would give users control to manage
> their own constraints list too if they wish. Maybe we do this in
> libraries while continuing to be more specific in applications?

At the meeting in Austin, the requirements team accepted my proposal
to stop syncing requirements updates into projects, as described
in https://etherpad.openstack.org/p/ocata-requirements-notes

We haven't been able to find anyone to work on the implementation,
though. It is my understanding that Tony did contact the Telemetry
and Swift teams, who are most interested in this area of change,
about devoting some resources to the tasks outlined in the proposal.

Doug

> 
> > 
> > My 2c,
> > 
> > Cheers,
> > 
> > ¹ Unless some API changed in a dep and we needed to raise the dep,
> > obviously.
> > 
> > -- 
> > Julien Danjou
> > # Free Software hacker
> > # https://julien.danjou.info
> 
> I don't have all the answers, but am fairly certain the situation we
> have today is better than the one from several years ago. It is just not
> perfect. I think we are better served by refining the current setup or
> replacing it with something better but not by reverting.
> 
> Clark
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Julien Danjou
On Wed, Apr 19 2017, Clark Boylan wrote:

> I think we know from experience that just stopping (eg reverting to the
> situation we had before requirements and constraints) would lead to
> sadness. Installations would frequently be impossible due to some
> unresolvable error in dependency resolution. Do you have some
> alternative in mind? Perhaps we loosen the in project requirements and
> explicitly state that constraints are known to work due to testing and
> users should use constraints? That would give users control to manage
> their own constraints list too if they wish. Maybe we do this in
> libraries while continuing to be more specific in applications?

Most of the problem that the requirements process is trying to solve is
already addressed by upper-constraints blocking new releases. Those upper
constraints are used in most jobs, preventing most of the failures seen in
gates. They would have "covered" the pbr issue.
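
For anyone consuming this outside the gate, applying the constraints is just
a pip option; the URL below is where the list usually lives, though treat the
exact command as illustrative:

    pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt gnocchi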

What I want to stop here is the automatic push of blacklisting/capping
of stuff to *everything* in OpenStack as soon as one project has a
problem with something.
> I don't have all the answers, but am fairly certain the situation we
> have today is better than the one from several years ago. It is just not
> perfect. I think we are better served by refining the current setup or
> replacing it with something better but not by reverting.

Agreed, I'm not suggesting we revert everything, just the automatic push
of random requirements limits and bounds to Oslo (and to other projects if
you like). We haven't done it in Telemetry for a good year now, and again,
we saw 0 breakage due to that change. Just more ease in installing stuff.

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Bridging the production/CI workflow gap with large periodic CI jobs

2017-04-19 Thread Ben Nemec
TLDR: We have the capacity to do this.  One scale job can be absorbed 
into our existing test infrastructure with minimal impact.


On 04/19/2017 07:50 AM, Flavio Percoco wrote:

On 18/04/17 14:28 -0400, Emilien Macchi wrote:

On Mon, Apr 17, 2017 at 3:52 PM, Justin Kilpatrick
 wrote:

Because CI jobs tend to max out about 5 nodes there's a whole class of
minor bugs that make it into releases.

What happens is that they never show up in small clouds, then when
they do show up in larger testing clouds the people deploying those
simply work around the issue and get onto what they where supposed to
be testing. These workarounds do get documented/BZ'd but since they
don't block anyone and only show up in large environments they become
hard for developers to fix.

So the issue gets stuck in limbo, with nowhere to test a patchset and
no one owning the issue.

These issues pile up and pretty soon there is a significant difference
between the default documented workflow and the 'scale' workflow which
is filled with workarounds which may or may not be documented
upstream.

I'd like to propose getting these issues more visibility to having a
periodic upstream job that uses 20-30 ovb instances to do a larger
deployment. Maybe at 3am on a Sunday or some other time where there's
idle execution capability to exploit. The goal being to make these
sorts of issues more visible and hopefully get better at fixing them.


Wait no, I know some folks at 3am on a Saturday night who use TripleO
CI (ok that was a joke).


Jokes apart, it really depends on the TZ and when you schedule it. 3:00
UTC on a
Sunday is Monday 13:00 in Sydney :) Saturdays might work better but
remember
that some countries work on Sundays.


With the exception of the brief period where the ovb jobs were running 
at full capacity 24 hours a day, there has always been a lull in 
activity during early morning UTC.  Yes, there are people working during 
that time, but generally far fewer and the load on TripleO CI is at its 
lowest point.  Honestly I'd be okay running this scale job every night, 
not just on the weekend.  A week of changes is a lot to sift through if 
a scaling issue creeps into one of the many, many projects that affect 
such things in TripleO.


Also, I should note that we're not currently being constrained by 
absolute hardware limits in rh1.  The reason I haven't scaled our 
concurrent jobs higher is that there is already performance degradation 
when we have a full 70 jobs running at once.  This type of scale job 
would require a lot of theoretical resources, but those 30 compute nodes 
are mostly going to be sitting there idle while the controller(s) get 
deployed, so in reality their impact on the infrastructure is going to 
be less than if we just added more concurrent jobs that used 30 
additional nodes.  And we do have the memory/cpu/disk to spare in rh1 to 
spin up more vms.


We could also take advantage of heterogeneous OVB environments now so 
that the compute nodes are only 3 GB VMs instead of 8 as they are now. 
That would further reduce the impact of this sort of job.  It would 
require some tweaks to how the testenvs are created, but that shouldn't 
be a problem.





To be honest I'm not sure this is the best solution, but I'm seeing
this anti pattern across several issues and I think we should try and
come up with a solution.



Yes this proposal is really cool. There is an alternative to run this
periodic scenario outside TripleO CI and send results via email maybe.
But it is something we need to discuss with RDO Cloud people and see
if we would have such resources to make it on a weekly frequency.

Thanks for bringing this up, it's crucial for us to have this kind of
feedback, now let's take actions.


+1

Flavio



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][release] release-model of tripleo-common

2017-04-19 Thread Ben Nemec



On 04/18/2017 10:29 PM, Steve Baker wrote:



On Wed, Apr 19, 2017 at 1:14 PM, Doug Hellmann > wrote:

Excerpts from Steve Baker's message of 2017-04-19 13:05:37 +1200:
> Other than being consumed as a library, tripleo-common is the home
for a
> number of tripleo related files, image building templates, heat
plugins,
> mistral workbooks.
>
> I have a python-tripleoclient[1] change which is failing unit
tests because
> it depends on changes in tripleo-common which have landed in the
current
> cycle. Because tripleo-common is release-model cycle-trailing,
> tripleo-common 7.0.0.0b1 exists but the unit test job pulls in the
last
> full release (6.0.0).
>
> I'd like to know the best way of dealing with this, options are:
> a) make the python import optional, change the unit test to not
require the
> newer tripleo-common
> b) allow the unit test job to pull in pre-release versions like
7.0.0.0b1
> c) change tripleo-common release-model to cycle-with-intermediary and
> immediately release a 7.0.0
>
> I think going with c) would mean doing a major release at the
start of each
> development cycle instead of at the end, then doing releases
throughout the
> cycle following our standard semver.
>
> [1] https://review.openstack.org/#/c/448300/


As a library, tripleo-common should not use pre-release versioning like
alphas and betas because of exactly the problem you've discovered: pip
does not allow them to be installed by default, and so we don't put them
in our constraint list.

So, you can keep tripleo-common as cycle-trailing, but since it's a
library use regular versions following semantic versioning rules to
ensure the new releases go out and can be installed.

You probably do want to start with a 7.0.0 release now, and from
there on use SemVer to increment (rather than automatically releasing
a new major version at the start of each cycle).



OK, thanks. We need to determine now whether to release 7.0.0.0b1 as
7.0.0, or release current master:
http://git.openstack.org/cgit/openstack/tripleo-common/log/


Hmm, I'm probably going to run into the same problem with 
https://review.openstack.org/#/c/431145/ because we're starting to use 
instack-undercloud as a library instead of a standalone project.  While 
we're making this change for tripleo-common anyway, I'd like to do it for 
instack-undercloud as well.


It's probably safest to release b1 as 7.0.0 since we know all of those 
b1 releases worked together, but our integration jobs do actually test 
all of the latest branches together so we're probably okay to just 
release master if we want.  There's the possibility of breaking unit 
tests though, I guess.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Development workflow for bunch of patches

2017-04-19 Thread Ben Nemec



On 04/19/2017 03:11 AM, Sławek Kapłoński wrote:

Hello,

I have a question about how to deal with bunch of patches which depends
one on another.
I did patch to neutron (https://review.openstack.org/#/c/449831/) which
is not merged yet but I wanted to start also another patch which is
depend on this one (https://review.openstack.org/#/c/457816/).
Currently I was trying to do something like:
1. git review -d 
2. git checkout -b new_branch_for_second_patch
3. Make second patch, commit all changes
4. git review <— this will ask me if I really want to push two patches
to gerrit so I answered „yes”

Everything is easy for me as long as I’m not doing more changes in first
patch. How I should work with it if I let’s say want to change something
in first patch and later I want to make another change to second patch?
IIRC when I tried to do something like that and I made „git review” to
push changes in second patch, first one was also updated (and I lost
changes made for this one in another branch).
How I should work with something like that? Is there any guide about
that (I couldn’t find such)?


I did a few how-to videos on working with patch series in Gerrit.  The 
first one is here: https://www.youtube.com/watch?v=mHyvP7zp4Ko  There 
are some follow-up ones that discuss other things you can do with a 
patch series too.  The full playlist is here: 
https://www.youtube.com/playlist?list=PLR97FKPZ-mD9XJCfwDE5c-td9lZGIPfS5
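
For the two-patch case described above, a rough sketch of one way to handle
it from the command line, using your second change (457816) as the starting
point (the videos walk through this in more detail):

    git review -d 457816        # download the second patch; its parent is the first patch
    git rebase -i HEAD~2        # mark the first commit as "edit"
    # ...make your changes to the first patch...
    git commit --amend          # keep the existing Change-Id in the message
    git rebase --continue
    # ...make any changes to the second patch, then:
    git commit --amend
    git review                  # pushes new revisions of both patches together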




—
Best regards
Slawek Kaplonski
sla...@kaplonski.pl 





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-doc] [dev] Docs team meeting tomorrow

2017-04-19 Thread Alexandra Settle
Hey everyone,

Just to clarify – that is Thursday the 20th of April, 2100 UTC.

Apologies for any confusion – just attempting to get ahead of the game and 
forgot to remove “today”.

See you there,

Alex

From: Alexandra Settle 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, April 19, 2017 at 11:45 AM
To: "openstack-d...@lists.openstack.org" 
Cc: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [openstack-doc] [dev] Docs team meeting tomorrow

Hey everyone,

The docs meeting will continue today in #openstack-meeting-alt as scheduled 
(Thursday at 21:00 UTC). For more details, and the agenda, see the meeting 
page: - 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

Specialty team leads – if you are unable to attend the meeting, please send me 
your team reports to include in the doc newsletter.

Doc team – I highly recommend you attend. In light of the OSIC news, we are 
heavily affected and attendance would be appreciated to discuss future actions.

Thanks,

Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Michał Jastrzębski
I think LABEL is a great idea for all the "informative" stuff. In fact,
if we could somehow abuse LABEL to fill it in after the packages are
installed, we could use it as a version manifest. That would make the
"has the version changed" logic much easier, since we'd have easy access
to this information on both the image and the container.

Our autopushing mechanism would still work with tags and the HEAD of the
stable branch in this case.

Kevin, your use case would then be handled like this:
1. pull container nova-compute:ocata, tag it locally as
nova-compute:ocata-deployed, deploy it
2. every now and then pull a fresh nova-compute:ocata from dockerhub
3. compare the versions in the LABELs to see whether you want to upgrade or not
4. if you do, retag :ocata-deployed to :ocata-old and :ocata to
:ocata-deployed, then run the upgrade
5. keep :ocata-old, revision it, and back it up for as long as you want

I also think that we can ship utils to do this in kolla, so people
won't need to write these themselves.
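
A rough sketch of steps 1-4 with plain docker commands (the image name and
label key are hypothetical, just to show the shape of it):

    docker pull kolla/nova-compute:ocata
    docker tag kolla/nova-compute:ocata kolla/nova-compute:ocata-deployed
    # later, after pulling a fresh :ocata, compare the version labels:
    docker inspect -f '{{ index .Config.Labels "nova.version" }}' kolla/nova-compute:ocata
    docker inspect -f '{{ index .Config.Labels "nova.version" }}' kolla/nova-compute:ocata-deployed
    # if they differ and you want to upgrade:
    docker tag kolla/nova-compute:ocata-deployed kolla/nova-compute:ocata-old
    docker tag kolla/nova-compute:ocata kolla/nova-compute:ocata-deployed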

Does that work?

Cheers,
Michal

On 19 April 2017 at 05:02, Flavio Percoco  wrote:
> On 19/04/17 11:20 +0100, Paul Bourke wrote:
>>
>> I'm wondering if moving to using docker labels is a better way of solving
>> the various issue being raised here.
>>
>> We can maintain a tag for each of master/ocata/newton/etc, and on each
>> image have a LABEL with info such as 'pbr of service/pbr of kolla/link to CI
>> of build/etc'. I believe this solves all points Kevin mentioned except
>> rollback, which afaik, OpenStack doesn't support anyway. It also solves
>> people's concerns with what is actually in the images, and is a standard
>> Docker mechanism.
>>
>> Also as Michal mentioned, if users are concerned about keeping images,
>> they can tag and stash them away themselves. It is overkill to maintain
>> hundreds of (imo meaningless) tags in a registry, the majority of which
>> people don't care about - they only want the latest of the branch they're
>> deploying.
>>
>> Every detail of a running Kolla system can be easily deduced by scanning
>> across nodes and printing the labels of running containers, functionality
>> which can be shipped by Kolla. There are also methods for fetching labels of
>> remote images[0][1] for users wishing to inspect what they are upgrading to.
>>
>> [0] https://github.com/projectatomic/skopeo
>> [1] https://github.com/docker/distribution/issues/1252
>
>
>
> You beat me to it, Paul.
>
> I think using lables to communicate the version of each openstack software
> installed in the image is the way to go here. We're looking into doing this
> ourselves as part of the RDO pipeline and it'd be awesome to have it being
> part
> of kolla-build itself. Steve Baker, I believe, was working on this.
>
> The more explicit we are about the contents of the image, the better. People
> want to know what's in there, rather than assuming based on the tag.
>
> Flavio
>
>
>> -Paul
>>
>> On 18/04/17 22:10, Michał Jastrzębski wrote:
>>>
>>> On 18 April 2017 at 13:54, Doug Hellmann  wrote:

 Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:
>
> On 18 April 2017 at 12:41, Doug Hellmann  wrote:
>>
>> Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
>>>
>>> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann
>>> 
>>> wrote:
>>>
 Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34
 -0700:
>
> My dear Kollegues,
>
> Today we had discussion about how to properly name/tag images being
> pushed to dockerhub. That moved towards general discussion on
> revision
> mgmt.
>
> Problem we're trying to solve is this:
> If you build/push images today, your tag is 4.0
> if you do it tomorrow, it's still 4.0, and will keep being 4.0
> until
> we tag new release.
>
> But image built today is not equal to image built tomorrow, so we
> would like something like 4.0.0-1, 4.0.0-2.
> While we can reasonably detect history of revisions in dockerhub,
> local env will be extremely hard to do.
>
> I'd like to ask you for opinions on desired behavior and how we
> want
> to deal with revision management in general.
>
> Cheers,
> Michal
>

 What's in the images, kolla? Other OpenStack components?
>>>
>>>
>>>
>>> Yes, each image will typically contain all software required for one
>>> OpenStack service, including dependencies from OpenStack projects or
>>> the
>>> base OS. Installed via some combination of git, pip, rpm, deb.
>>>
 Where does the
 4.0.0 come from?


>>> Its the python version string from the kolla project itself, so
>>> ultimately
>>> I think pbr. I'm suggesting that we switch to using the

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Clark Boylan
On Wed, Apr 19, 2017, at 05:54 AM, Julien Danjou wrote:
> Hoy,
> 
> So Gnocchi gate is all broken (agan) because it depends on "pbr" and
> some new release of oslo.* depends on pbr!=2.1.0.
> 
> Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
> that got in banished by requirements Gods. It does not prevent it to be
> used e.g. to install the software or get version information. But it
> does break anything that is not in OpenStack because well, pip installs
> the latest pbr (2.1.0) and then oslo.* is unhappy about it.

It actually breaks everything, including OpenStack. Shade and others are
affected by this as well. The specific problem here is that PBR is a
setup_requires which means it gets installed by easy_install before
anything else. This means that the requirements restrictions are not
applied to it (neither are the constraints). So you get latest PBR from
easy_install then later when something checks the requirements
(pkg_resources console script entrypoints?) they break because latest
PBR isn't allowed.

We need to stop pinning PBR and more generally stop pinning any
setup_requires (there are a few more now since setuptools itself is
starting to use that to list its deps rather than bundling them).

> So I understand the culprit is probably pip installation scheme, and we
> can blame him until we fix it. I'm also trying to push pbr 2.2.0 to
> avoid the entire issue.

Yes, a new release of PBR undoing the "pin" is the current sane step
forward for fixing this particular issue. Monty also suggested that we
gate global-requirements changes on a check that they do not pin any
setup_requires.

> But for the future, could we stop updating the requirements in oslo libs
> for no good reason? just because some random OpenStack project hit a bug
> somewhere?
> 
> For example, I've removed requirements update on tooz¹ for more than a
> year now, which did not break *anything* in the meantime, proving that
> this process is giving more problem than solutions. Oslo libs doing that
> automatic update introduce more pain for all consumers than anything (at
> least not in OpenStack).

You are likely largely shielded by the constraints list here, which is
derived from the global requirements list. Basically, by using constraints
you get a distilled global requirements list, so even without taking part
in the requirements updates you'd be shielded from breakage when installed
via something like devstack or any other deployment method that uses
constraints.
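For reference, this is roughly how consumers apply that distilled list today
(the URL is where the file lives at the time of writing; adjust for your branch):

  pip install \
      -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt \
      -r requirements.txt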

> So if we care about Oslo users outside OpenStack, I beg us to stop this
> crazyness. If we don't, we'll just spend time getting rid of Oslo over
> the long term…

I think we know from experience that just stopping (e.g. reverting to the
situation we had before requirements and constraints) would lead to
sadness. Installations would frequently be impossible due to some
unresolvable error in dependency resolution. Do you have some alternative
in mind? Perhaps we loosen the in-project requirements and explicitly state
that the constraints are known to work because they are tested, and that
users should use them? That would also give users control to manage their
own constraints list if they wish. Maybe we do this in libraries while
continuing to be more specific in applications?

> 
> My 2c,
> 
> Cheers,
> 
> ¹ Unless some API changed in a dep and we needed to raise the dep,
> obviously.
> 
> -- 
> Julien Danjou
> # Free Software hacker
> # https://julien.danjou.info

I don't have all the answers, but am fairly certain the situation we
have today is better than the one from several years ago. It is just not
perfect. I think we are better served by refining the current setup or
replacing it with something better but not by reverting.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Development workflow for bunch of patches

2017-04-19 Thread Eric Fried
I've always used rebase rather than cherry-pick in this situation.
Bonus is that sometimes (if no conflicts) I can do the rebase in gerrit
with two clicks rather than locally with a bunch of typing.
@kevinbenton, is there a benefit to using cherry-pick rather than rebase?

Thanks,
Eric Fried (efried)

On 04/19/2017 03:39 AM, Sławek Kapłoński wrote:
> Hello,
> 
> Thanks a lot :)
> 
> — 
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl 
> 
> 
> 
>> Message written by Kevin Benton on 19.04.2017, at 10:25:
>>
>> Whenever you want to work on the second patch you would need to first
>> checkout the latest version of the first patch and then cherry-pick
>> the later patch on top of it. That way when you update the second one
>> it won't affect the first patch.
>>
>> The -R flag can also be used to prevent unexpected rebases of the
>> parent patch. More details here:
>>
>> https://docs.openstack.org/infra/manual/developers.html#adding-a-dependency
>>
>> On Wed, Apr 19, 2017 at 1:11 AM, Sławek Kapłoński > > wrote:
>>
>> Hello,
>>
>> I have a question about how to deal with bunch of patches which
>> depends one on another.
>> I did patch to neutron (https://review.openstack.org/#/c/449831/
>> ) which is not merged
>> yet but I wanted to start also another patch which is depend on
>> this one (https://review.openstack.org/#/c/457816/
>> ).
>> Currently I was trying to do something like:
>> 1. git review -d 
>> 2. git checkout -b new_branch_for_second_patch
>> 3. Make second patch, commit all changes
>> 4. git review <— this will ask me if I really want to push two
>> patches to gerrit so I answered „yes”
>>
>> Everything is easy for me as long as I’m not doing more changes in
>> first patch. How I should work with it if I let’s say want to
>> change something in first patch and later I want to make another
>> change to second patch? IIRC when I tried to do something like
>> that and I made „git review” to push changes in second patch,
>> first one was also updated (and I lost changes made for this one
>> in another branch).
>> How I should work with something like that? Is there any guide
>> about that (I couldn’t find such)?
>>
>> — 
>> Best regards
>> Slawek Kaplonski
>> sla...@kaplonski.pl 
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptls][tc] help needed filling out project-navigator data

2017-04-19 Thread Jimmy McArthur
Ideally, as far back as your project goes. That way we will have a 
complete API history, per release, on the project navigator.  This also 
helps us determine the project age.


Thanks!
Jimmy


Telles Nobrega 
April 19, 2017 at 9:48 AM
Hi Monty,

quick question, how far into past releases should we go?

Thanks,

--

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat

tenob...@redhat.com

TRIED. TESTED. TRUSTED.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Monty Taylor 
April 18, 2017 at 3:03 PM
Hey everybody!

The Foundation is rolling out a new version of the Project Navigator. 
One of the things it contains is a section that shows API versions 
available for each project for each release. They asked the TC's help 
in providing that data, so we spun up a new repository:


  http://git.openstack.org/cgit/openstack/project-navigator-data

that the Project Navigator will consume.

We need your help!

The repo contains a file for each project for each release with 
CURRENT/SUPPORTED/DEPRECATED major versions and also microversion 
ranges if they exist. The data is pretty much exactly what everyone 
already produces in their version discovery documents - although it's 
normalized into the format described by the API-WG:



https://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html#version-discovery 



What would be really helpful is if someone from each project could go 
make a patch to the repo adding the historical (and currently) info 
for your project. We'll come up with a process for maintaining it over 
time - but for now just crowdsourcing the data seems like the best way.


The README file explains the format, and there is data from a few of 
the projects for Newton.


It would be great to include an entry for every release - which for 
many projects will just be the same content copied a bunch of times 
back to the first release the project was part of OpenStack.


This is only needed for service projects (something that registers in 
the keystone catalog) and is only needed for 'main' APIs (like, it is 
not needed, for now, to put in things like Placement)


If y'all could help - it would be super great!

Thanks!
Monty

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptls][tc] help needed filling out project-navigator data

2017-04-19 Thread Telles Nobrega
Hi Monty,

quick question, how far into past releases should we go?

Thanks,

On Tue, Apr 18, 2017 at 5:06 PM Monty Taylor  wrote:

> Hey everybody!
>
> The Foundation is rolling out a new version of the Project Navigator.
> One of the things it contains is a section that shows API versions
> available for each project for each release. They asked the TC's help in
> providing that data, so we spun up a new repository:
>
>http://git.openstack.org/cgit/openstack/project-navigator-data
>
> that the Project Navigator will consume.
>
> We need your help!
>
> The repo contains a file for each project for each release with
> CURRENT/SUPPORTED/DEPRECATED major versions and also microversion ranges
> if they exist. The data is pretty much exactly what everyone already
> produces in their version discovery documents - although it's normalized
> into the format described by the API-WG:
>
>
>
> https://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html#version-discovery
>
> What would be really helpful is if someone from each project could go
> make a patch to the repo adding the historical (and currently) info for
> your project. We'll come up with a process for maintaining it over time
> - but for now just crowdsourcing the data seems like the best way.
>
> The README file explains the format, and there is data from a few of the
> projects for Newton.
>
> It would be great to include an entry for every release - which for many
> projects will just be the same content copied a bunch of times back to
> the first release the project was part of OpenStack.
>
> This is only needed for service projects (something that registers in
> the keystone catalog) and is only needed for 'main' APIs (like, it is
> not needed, for now, to put in things like Placement)
>
> If y'all could help - it would be super great!
>
> Thanks!
> Monty
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat

tenob...@redhat.com

TRIED. TESTED. TRUSTED. 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] New CI Job definitions

2017-04-19 Thread Brad P. Crochet
On Tue, Apr 18, 2017 at 2:10 AM Ренат Ахмеров 
wrote:

> Thanks Brad!
>
> So kombu gate is now non-apache, right?
>
>
No. It would be running under mod_wsgi. We can make it non-apache if you
like. Would be pretty easy to do so.


> Thanks
>
> Renat Akhmerov
> @Nokia
>
> On 17 Apr 2017, 22:17 +0700, Brad P. Crochet , wrote:
>
> Hi y'all...
>
> In the midst of trying to track down some errors being seen with tempest
> tests whilst running under mod_wsgi/apache, I've made it so that the
> devstack plugin is capable of also running with mod_wsgi/apache.[1]
>
> In doing so, It's become the default devstack job. I've also now created
> some 'non-apache' jobs that basically are what the old jobs did, just
> renamed.
>
> Also, I've consolidated the job definitions (the original and the kombu)
> to simplify and DRY out the jobs. You can see the infra review here.[2]
>
> The list of jobs will be:
> gate-mistral-devstack-dsvm-ubuntu-xenial-nv
> gate-mistral-devstack-dsvm-non-apache-ubuntu-xenial-nv
> gate-mistral-devstack-dsvm-kombu-ubuntu-xenial-nv
>
> Note that the trusty jobs have been eliminated.
>
> Essentially, I've added a '{special}' tag to the job definition, allowing
> us to create special-cased devstack jobs. So, as you can see, I've migrated
> the kombu job to be such a thing. It should also be possible to combine
> them.
>
> [1] https://review.openstack.org/#/c/454710/
> [2] https://review.openstack.org/#/c/457106/
> --
> Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
> Principal Software Engineer
> (c) 704.236.9385 <(704)%20236-9385>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
Principal Software Engineer
(c) 704.236.9385
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][infra][pbr] Nominating Stephen Finucane for pbr-core

2017-04-19 Thread ChangBo Guo
There have been no objections in the past 7 days, so I have added Stephen
Finucane to the pbr-core group. Welcome, Stephen!


2017-04-19 0:01 GMT+08:00 Jeremy Stanley :

> On 2017-04-12 08:14:31 -0500 (-0500), Monty Taylor wrote:
> [...]
> > Recently Stephen Finucane (sfinucan) has stepped up to the plate
> > to help sort out issues we've been having. He's shown a lack of
> > fear of the codebase and an understanding of what's going on. He's
> > also following through on patches to projects themselves when
> > needed, which is a huge part of the game. And most importantly he
> > knows when to suggest we _not_ do something.
> [...]
>
> As an occasional pbr author and transitive core, I'm in favor. The
> more help the better!
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Bridging the production/CI workflow gap with large periodic CI jobs

2017-04-19 Thread Flavio Percoco

On 18/04/17 14:28 -0400, Emilien Macchi wrote:

On Mon, Apr 17, 2017 at 3:52 PM, Justin Kilpatrick  wrote:

Because CI jobs tend to max out at about 5 nodes, there's a whole class of
minor bugs that make it into releases.

What happens is that they never show up in small clouds; then, when they do
show up in larger testing clouds, the people deploying those simply work
around the issue and get on with what they were supposed to be testing.
These workarounds do get documented/BZ'd, but since they don't block anyone
and only show up in large environments they become hard for developers to fix.

So the issue gets stuck in limbo, with nowhere to test a patchset and
no one owning the issue.

These issues pile up and pretty soon there is a significant difference
between the default documented workflow and the 'scale' workflow which
is filled with workarounds which may or may not be documented
upstream.

I'd like to propose getting these issues more visibility to having a
periodic upstream job that uses 20-30 ovb instances to do a larger
deployment. Maybe at 3am on a Sunday or some other time where there's
idle execution capability to exploit. The goal being to make these
sorts of issues more visible and hopefully get better at fixing them.


Wait no, I know some folks at 3am on a Saturday night who use TripleO
CI (ok that was a joke).


Jokes aside, it really depends on the TZ and when you schedule it. 3:00 UTC on a
Sunday is Sunday 13:00 in Sydney :) Saturdays might work better, but remember
that some countries work on Sundays.


To be honest I'm not sure this is the best solution, but I'm seeing
this anti-pattern across several issues and I think we should try to
come up with a solution.



Yes, this proposal is really cool. An alternative would be to run this
periodic scenario outside TripleO CI and maybe send the results via email.
But that is something we need to discuss with the RDO Cloud people, to see
whether we would have the resources to run it on a weekly frequency.

Thanks for bringing this up; it's crucial for us to have this kind of
feedback. Now let's take action.


+1

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Can we stop global requirements update?

2017-04-19 Thread Julien Danjou
Hoy,

So Gnocchi gate is all broken (agan) because it depends on "pbr" and
some new release of oslo.* depends on pbr!=2.1.0.

Neither Gnocchi nor Oslo cares about whatever bug there is in pbr 2.1.0
that got in banished by requirements Gods. It does not prevent it to be
used e.g. to install the software or get version information. But it
does break anything that is not in OpenStack because well, pip installs
the latest pbr (2.1.0) and then oslo.* is unhappy about it.

So I understand the culprit is probably pip installation scheme, and we
can blame him until we fix it. I'm also trying to push pbr 2.2.0 to
avoid the entire issue.

But for the future, could we stop updating the requirements in oslo libs
for no good reason? just because some random OpenStack project hit a bug
somewhere?

For example, I've removed requirements update on tooz¹ for more than a
year now, which did not break *anything* in the meantime, proving that
this process is giving more problem than solutions. Oslo libs doing that
automatic update introduce more pain for all consumers than anything (at
least not in OpenStack).

So if we care about Oslo users outside OpenStack, I beg us to stop this
crazyness. If we don't, we'll just spend time getting rid of Oslo over
the long term…

My 2c,

Cheers,

¹ Unless some API changed in a dep and we needed to raise the dep,
obviously.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][elections]questions about one platform vision

2017-04-19 Thread Flavio Percoco

On 19/04/17 11:17 +0200, Thierry Carrez wrote:

Adam Lawson wrote:

[...]
I've been an OpenStack architect for at least 5+ years now and work with
many large Fortune 100 IT shops. OpenStack in the enterprise is being
used to orchestrate virtual machines. Despite the additional
capabilities OpenStack is trying to accommodate, that's basically it. At
scale, that's what they're doing. Not many are orchestrating bare metal
that I've seen or heard. And they are exploring K8s and Docker Swarm to
orchestrate containers. They aren't looking at OpenStack to do that.


I have to disagree. We have evidence that some of the largest Kubernetes
deployments in the world happen on top of an OpenStack infrastructure,
and hopefully some of those will talk about it in Boston.

I feel like you fall in the common trap of thinking that both
technologies are competing, while one is designed for infrastructure
providers and the other for application deployers. Sure, you can be a
Kubernetes-only shop if you're small enough or have Google-like
discipline (and a lot of those shops, unsurprisingly, were present in
Berlin), but most companies have to offer a wider array of
infrastructure services for their developers. That's where OpenStack, an
open infrastructure stack, comes in. Giving the infrastructure provider
a framework to offer multiple options to application developers and
operators.



Yes, this, a gazillion of times. I do _NOT_ think CNCF and OpenStack are (or
need to be) in competition and I'd rather explore the different ways we can
combine these 2 communities or, more specifically, some of the technologies that
are part of these communities.

To do this, we need to explore ways to make OpenStack more "flexible" so that we
can allow different combinations of OpenStack; we need to allow people to use it
more like a framework.

I definitely don't mean it's the only thing and I'm really against calling
almost anything "the one thing" (unless we're talking about pasta or pizza) and
I believe falling into that trap would damage the community (we barely made it
out in our early years/days).

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [acceleration]Cyborg Weekly Meeting 2017.04.19 agenda

2017-04-19 Thread Zhipeng Huang
Hi Team,

In today's meeting we will review our ongoing specs as usual, hopefully get
some of them approved, and move on to implementation. We will also need to
discuss the DB spec, which the Mellanox team won't be able to finish on
time. The last item would be to discuss the Boston Summit related topics.

The meeting is at 11:00 am EST in #openstack-cyborg

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Flavio Percoco

On 19/04/17 11:20 +0100, Paul Bourke wrote:
I'm wondering if moving to using docker labels is a better way of 
solving the various issue being raised here.


We can maintain a tag for each of master/ocata/newton/etc, and on each 
image have a LABEL with info such as 'pbr of service/pbr of kolla/link 
to CI of build/etc'. I believe this solves all points Kevin mentioned 
except rollback, which afaik, OpenStack doesn't support anyway. It 
also solves people's concerns with what is actually in the images, and 
is a standard Docker mechanism.


Also as Michal mentioned, if users are concerned about keeping images, 
they can tag and stash them away themselves. It is overkill to 
maintain hundreds of (imo meaningless) tags in a registry, the 
majority of which people don't care about - they only want the latest 
of the branch they're deploying.


Every detail of a running Kolla system can be easily deduced by 
scanning across nodes and printing the labels of running containers, 
functionality which can be shipped by Kolla. There are also methods 
for fetching labels of remote images[0][1] for users wishing to 
inspect what they are upgrading to.


[0] https://github.com/projectatomic/skopeo
[1] https://github.com/docker/distribution/issues/1252



You beat me to it, Paul.

I think using labels to communicate the version of each piece of OpenStack
software installed in the image is the way to go here. We're looking into doing
this ourselves as part of the RDO pipeline and it'd be awesome to have it be part
of kolla-build itself. Steve Baker, I believe, was working on this.

The more explicit we are about the contents of the image, the better. People
want to know what's in there, rather than assuming based on the tag.

Flavio


-Paul

On 18/04/17 22:10, Michał Jastrzębski wrote:

On 18 April 2017 at 13:54, Doug Hellmann  wrote:

Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:

On 18 April 2017 at 12:41, Doug Hellmann  wrote:

Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:

On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
wrote:


Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:

My dear Kollegues,

Today we had discussion about how to properly name/tag images being
pushed to dockerhub. That moved towards general discussion on revision
mgmt.

Problem we're trying to solve is this:
If you build/push images today, your tag is 4.0
if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
we tag new release.

But image built today is not equal to image built tomorrow, so we
would like something like 4.0.0-1, 4.0.0-2.
While we can reasonably detect history of revisions in dockerhub,
local env will be extremely hard to do.

I'd like to ask you for opinions on desired behavior and how we want
to deal with revision management in general.

Cheers,
Michal



What's in the images, kolla? Other OpenStack components?



Yes, each image will typically contain all software required for one
OpenStack service, including dependencies from OpenStack projects or the
base OS. Installed via some combination of git, pip, rpm, deb.


Where does the
4.0.0 come from?



Its the python version string from the kolla project itself, so ultimately
I think pbr. I'm suggesting that we switch to using the
version.release_string[1] which will tag with the longer version we use for
other dev packages.

[1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py


Why are you tagging the artifacts containing other projects with the
version number of kolla, instead of their own version numbers and some
sort of incremented build number?


This is what we do in Kolla and I'd say logistics and simplicity of
implementation. Tags are more than just information for us. We have to


But for a user consuming the image, they have no idea what version of
nova is in it because the version on the image is tied to a different
application entirely.


That's easy enough to check tho (just docker exec into container and
do pip freeze). On the other hand you'll have information that "this
set of various versions was tested together" which is arguably more
important.


deploy these images and we have to know a tag. Combine that with clear
separation of build phase from deployment phase (really build phase is
entirely optional thanks to dockerhub), you'll end up with either
automagical script that will have to somehow detect correct version
mix of containers that works with each other, or hand crafted list
that will have 100+ versions hardcoded.

Incremental build is hard because builds are atomic and you never
really know how many times images were rebuilt (also local rebuilt vs
dockerhub-pushed rebuild will cause collisions in tags).


Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Bogdan Dobrelya
On 19.04.2017 12:39, Britt Houser (bhouser) wrote:
> I agree with Paul here.  I like the idea of solving this with labels instead 
> of tags.  A label is imbedded into the docker image, and if it changes, the 
> checksum of the image changes.  A tag is kept in the image manifest, and can 
> be altered w/o changing the underlying image.  So to me a label is better 
> IMHO, b/c it preserves this data within the image itself in a manner which is 
> easy to detect if its been altered.
> 
> thx,
> britt

+1, very good idea. Binding released artifacts to a checksum is indeed way
better than unreliable tags!

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-19 Thread Florian Fuchs
On Wed, Apr 19, 2017 at 12:38 AM, Emilien Macchi  wrote:
> On Tue, Apr 18, 2017 at 6:20 PM, Jason E. Rist  wrote:
>> On 04/18/2017 02:28 AM, Steven Hardy wrote:
>>> On Thu, Apr 06, 2017 at 11:53:04AM +0200, Martin André wrote:
>>> > Hellooo,
>>> >
>>> > I'd like to propose we extend Florian Fuchs +2 powers to the
>>> > tripleo-validations project. Florian is already core on tripleo-ui
>>> > (well, tripleo technically so this means there is no changes to make
>>> > to gerrit groups).
>>> >
>>> > Florian took over many of the stalled patches in tripleo-validations
>>> > and is now the principal contributor in the project [1]. He has built
>>> > a good expertise over the last months and I think it's time he has
>>> > officially the right to approve changes in tripleo-validations.
>>> >
>>> > Consider this my +1 vote.
>>>
>>> +1
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> What do we have to do to make this official?
>
> done

Thank you everyone!

Florian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-doc] [dev] Docs team meeting tomorrow

2017-04-19 Thread Alexandra Settle
Hey everyone,

The docs meeting will continue today in #openstack-meeting-alt as scheduled 
(Thursday at 21:00 UTC). For more details, and the agenda, see the meeting 
page: - 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

Specialty team leads – if you are unable to attend the meeting, please send me 
your team reports to include in the doc newsletter.

Doc team – I highly recommend you attend. In light of the OSIC news, we are 
heavily affected and attendance would be appreciated to discuss future actions.

Thanks,

Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Britt Houser (bhouser)
I agree with Paul here.  I like the idea of solving this with labels instead of
tags.  A label is embedded into the docker image, and if it changes, the
checksum of the image changes.  A tag is kept outside the image, in the repository
metadata, and can be altered w/o changing the underlying image.  So to me a label
is better IMHO, b/c it preserves this data within the image itself in a manner
which makes it easy to detect if it's been altered.
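A quick way to see the difference on any node (image name is just an example):

  # re-tagging only touches repository metadata, the image ID stays the same
  docker tag kolla/centos-binary-nova-compute:4.0.0 \
      kolla/centos-binary-nova-compute:4.0.0-1
  docker images --no-trunc kolla/centos-binary-nova-compute   # both tags, one ID

  # a changed LABEL means a rebuilt image, i.e. a different ID/checksum
  docker inspect --format '{{ .Id }} {{ json .Config.Labels }}' \
      kolla/centos-binary-nova-compute:4.0.0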

thx,
britt


From: Paul Bourke 
Sent: Apr 19, 2017 6:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

I'm wondering if moving to using docker labels is a better way of
solving the various issue being raised here.

We can maintain a tag for each of master/ocata/newton/etc, and on each
image have a LABEL with info such as 'pbr of service/pbr of kolla/link
to CI of build/etc'. I believe this solves all points Kevin mentioned
except rollback, which afaik, OpenStack doesn't support anyway. It also
solves people's concerns with what is actually in the images, and is a
standard Docker mechanism.

Also as Michal mentioned, if users are concerned about keeping images,
they can tag and stash them away themselves. It is overkill to maintain
hundreds of (imo meaningless) tags in a registry, the majority of which
people don't care about - they only want the latest of the branch
they're deploying.

Every detail of a running Kolla system can be easily deduced by scanning
across nodes and printing the labels of running containers,
functionality which can be shipped by Kolla. There are also methods for
fetching labels of remote images[0][1] for users wishing to inspect what
they are upgrading to.

[0] https://github.com/projectatomic/skopeo
[1] https://github.com/docker/distribution/issues/1252

-Paul

On 18/04/17 22:10, Michał Jastrzębski wrote:
> On 18 April 2017 at 13:54, Doug Hellmann  wrote:
>> Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:
>>> On 18 April 2017 at 12:41, Doug Hellmann  wrote:
 Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
> wrote:
>
>> Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
>>> My dear Kollegues,
>>>
>>> Today we had discussion about how to properly name/tag images being
>>> pushed to dockerhub. That moved towards general discussion on revision
>>> mgmt.
>>>
>>> Problem we're trying to solve is this:
>>> If you build/push images today, your tag is 4.0
>>> if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>>> we tag new release.
>>>
>>> But image built today is not equal to image built tomorrow, so we
>>> would like something like 4.0.0-1, 4.0.0-2.
>>> While we can reasonably detect history of revisions in dockerhub,
>>> local env will be extremely hard to do.
>>>
>>> I'd like to ask you for opinions on desired behavior and how we want
>>> to deal with revision management in general.
>>>
>>> Cheers,
>>> Michal
>>>
>>
>> What's in the images, kolla? Other OpenStack components?
>
>
> Yes, each image will typically contain all software required for one
> OpenStack service, including dependencies from OpenStack projects or the
> base OS. Installed via some combination of git, pip, rpm, deb.
>
>> Where does the
>> 4.0.0 come from?
>>
>>
> Its the python version string from the kolla project itself, so ultimately
> I think pbr. I'm suggesting that we switch to using the
> version.release_string[1] which will tag with the longer version we use 
> for
> other dev packages.
>
> [1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py

 Why are you tagging the artifacts containing other projects with the
 version number of kolla, instead of their own version numbers and some
 sort of incremented build number?
>>>
>>> This is what we do in Kolla and I'd say logistics and simplicity of
>>> implementation. Tags are more than just information for us. We have to
>>
>> But for a user consuming the image, they have no idea what version of
>> nova is in it because the version on the image is tied to a different
>> application entirely.
>
> That's easy enough to check tho (just docker exec into container and
> do pip freeze). On the other hand you'll have information that "this
> set of various versions was tested together" which is arguably more
> important.
>
>>> deploy these images and we have to know a tag. Combine that with clear
>>> separation of build phase from deployment phase (really build phase is
>>> entirely optional thanks to dockerhub), you'll end up with either
>>> automagical script that will have to somehow detect correct 

Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-19 Thread Paul Bourke
I'm wondering if moving to using docker labels is a better way of 
solving the various issues being raised here.


We can maintain a tag for each of master/ocata/newton/etc, and on each 
image have a LABEL with info such as 'pbr of service/pbr of kolla/link 
to CI of build/etc'. I believe this solves all points Kevin mentioned 
except rollback, which afaik, OpenStack doesn't support anyway. It also 
solves people's concerns with what is actually in the images, and is a 
standard Docker mechanism.


Also as Michal mentioned, if users are concerned about keeping images, 
they can tag and stash them away themselves. It is overkill to maintain 
hundreds of (imo meaningless) tags in a registry, the majority of which 
people don't care about - they only want the latest of the branch 
they're deploying.


Every detail of a running Kolla system can be easily deduced by scanning 
across nodes and printing the labels of running containers, 
functionality which can be shipped by Kolla. There are also methods for 
fetching labels of remote images[0][1] for users wishing to inspect what 
they are upgrading to.


[0] https://github.com/projectatomic/skopeo
[1] https://github.com/docker/distribution/issues/1252
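For example (image name is illustrative, and skopeo prints JSON with a Labels
field):

  # read the labels of a remote image without pulling it
  skopeo inspect docker://docker.io/kolla/centos-binary-nova-compute:ocata | jq '.Labels'

  # and the same information from the containers already running on a node
  docker ps -q | xargs docker inspect --format '{{ .Name }} {{ json .Config.Labels }}'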

-Paul

On 18/04/17 22:10, Michał Jastrzębski wrote:

On 18 April 2017 at 13:54, Doug Hellmann  wrote:

Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:

On 18 April 2017 at 12:41, Doug Hellmann  wrote:

Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:

On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
wrote:


Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:

My dear Kollegues,

Today we had discussion about how to properly name/tag images being
pushed to dockerhub. That moved towards general discussion on revision
mgmt.

Problem we're trying to solve is this:
If you build/push images today, your tag is 4.0
if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
we tag new release.

But image built today is not equal to image built tomorrow, so we
would like something like 4.0.0-1, 4.0.0-2.
While we can reasonably detect history of revisions in dockerhub,
local env will be extremely hard to do.

I'd like to ask you for opinions on desired behavior and how we want
to deal with revision management in general.

Cheers,
Michal



What's in the images, kolla? Other OpenStack components?



Yes, each image will typically contain all software required for one
OpenStack service, including dependencies from OpenStack projects or the
base OS. Installed via some combination of git, pip, rpm, deb.


Where does the
4.0.0 come from?



Its the python version string from the kolla project itself, so ultimately
I think pbr. I'm suggesting that we switch to using the
version.release_string[1] which will tag with the longer version we use for
other dev packages.

[1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py


Why are you tagging the artifacts containing other projects with the
version number of kolla, instead of their own version numbers and some
sort of incremented build number?


This is what we do in Kolla and I'd say logistics and simplicity of
implementation. Tags are more than just information for us. We have to


But for a user consuming the image, they have no idea what version of
nova is in it because the version on the image is tied to a different
application entirely.


That's easy enough to check tho (just docker exec into container and
do pip freeze). On the other hand you'll have information that "this
set of various versions was tested together" which is arguably more
important.


deploy these images and we have to know a tag. Combine that with clear
separation of build phase from deployment phase (really build phase is
entirely optional thanks to dockerhub), you'll end up with either
automagical script that will have to somehow detect correct version
mix of containers that works with each other, or hand crafted list
that will have 100+ versions hardcoded.

Incremental build is hard because builds are atomic and you never
really know how many times images were rebuilt (also local rebuilt vs
dockerhub-pushed rebuild will cause collisions in tags).


Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [tc][elections]questions about one platform vision

2017-04-19 Thread Thierry Carrez
Adam Lawson wrote:
> [...]
> I've been an OpenStack architect for at least 5+ years now and work with
> many large Fortune 100 IT shops. OpenStack in the enterprise is being
> used to orchestrate virtual machines. Despite the additional
> capabilities OpenStack is trying to accommodate, that's basically it. At
> scale, that's what they're doing. Not many are orchestrating bare metal
> that I've seen or heard. And they are exploring K8s and Docker Swarm to
> orchestrate containers. They aren't looking at OpenStack to do that.

I have to disagree. We have evidence that some of the largest Kubernetes
deployments in the world happen on top of an OpenStack infrastructure,
and hopefully some of those will talk about it in Boston.

I feel like you fall in the common trap of thinking that both
technologies are competing, while one is designed for infrastructure
providers and the other for application deployers. Sure, you can be a
Kubernetes-only shop if you're small enough or have Google-like
discipline (and a lot of those shops, unsurprisingly, were present in
Berlin), but most companies have to offer a wider array of
infrastructure services for their developers. That's where OpenStack, an
open infrastructure stack, comes in. Giving the infrastructure provider
a framework to offer multiple options to application developers and
operators.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]weekly meeting of Apr.19

2017-04-19 Thread joehuang
Hello, team,

Agenda of Apr.19 weekly meeting:

  1.  feature implementation review
  2.  weekly meeting time
  3.  Pike-1.5 (May.2) preparation
  4.  Open Discussion

How to join:
#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on 
every Wednesday starting from UTC 14:00.


If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Development workflow for bunch of patches

2017-04-19 Thread Sławek Kapłoński
Hello,

Thanks a lot :)

— 
Best regards
Slawek Kaplonski
sla...@kaplonski.pl



> Message written by Kevin Benton on 19.04.2017, at 10:25:
> 
> Whenever you want to work on the second patch you would need to first 
> checkout the latest version of the first patch and then cherry-pick the later 
> patch on top of it. That way when you update the second one it won't affect 
> the first patch.
> 
> The -R flag can also be used to prevent unexpected rebases of the parent 
> patch. More details here:
> 
> https://docs.openstack.org/infra/manual/developers.html#adding-a-dependency 
> 
> 
> On Wed, Apr 19, 2017 at 1:11 AM, Sławek Kapłoński  > wrote:
> Hello,
> 
> I have a question about how to deal with bunch of patches which depends one 
> on another.
> I did patch to neutron (https://review.openstack.org/#/c/449831/ 
> ) which is not merged yet but I 
> wanted to start also another patch which is depend on this one 
> (https://review.openstack.org/#/c/457816/ 
> ).
> Currently I was trying to do something like:
> 1. git review -d 
> 2. git checkout -b new_branch_for_second_patch
> 3. Make second patch, commit all changes
> 4. git review <— this will ask me if I really want to push two patches to 
> gerrit so I answered „yes”
> 
> Everything is easy for me as long as I’m not doing more changes in first 
> patch. How I should work with it if I let’s say want to change something in 
> first patch and later I want to make another change to second patch? IIRC 
> when I tried to do something like that and I made „git review” to push 
> changes in second patch, first one was also updated (and I lost changes made 
> for this one in another branch).
> How I should work with something like that? Is there any guide about that (I 
> couldn’t find such)?
> 
> — 
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Development workflow for bunch of patches

2017-04-19 Thread Kevin Benton
Whenever you want to work on the second patch you would need to first
checkout the latest version of the first patch and then cherry-pick the
later patch on top of it. That way when you update the second one it won't
affect the first patch.

The -R flag can also be used to prevent unexpected rebases of the parent
patch. More details here:

https://docs.openstack.org/infra/manual/developers.html#adding-a-dependency
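Roughly, using the changes from your example (the cherry-pick argument is
whatever local commit holds your second patch):

  git review -d 449831                  # fetch the latest revision of the first patch
  git checkout -b second-patch-work     # work on a branch based on it
  git cherry-pick <sha-of-second-patch> # re-apply the follow-up change on top
  # ... edit, then update only the second change in gerrit:
  git commit -a --amend
  git review -R                         # -R: don't rebase onto a new parent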

On Wed, Apr 19, 2017 at 1:11 AM, Sławek Kapłoński 
wrote:

> Hello,
>
> I have a question about how to deal with bunch of patches which depends
> one on another.
> I did patch to neutron (https://review.openstack.org/#/c/449831/) which
> is not merged yet but I wanted to start also another patch which is depend
> on this one (https://review.openstack.org/#/c/457816/).
> Currently I was trying to do something like:
> 1. git review -d 
> 2. git checkout -b new_branch_for_second_patch
> 3. Make second patch, commit all changes
> 4. git review <— this will ask me if I really want to push two patches to
> gerrit so I answered „yes”
>
> Everything is easy for me as long as I’m not doing more changes in first
> patch. How I should work with it if I let’s say want to change something in
> first patch and later I want to make another change to second patch? IIRC
> when I tried to do something like that and I made „git review” to push
> changes in second patch, first one was also updated (and I lost changes
> made for this one in another branch).
> How I should work with something like that? Is there any guide about that
> (I couldn’t find such)?
>
> —
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Development workflow for bunch of patches

2017-04-19 Thread Sławek Kapłoński
Hello,

I have a question about how to deal with a bunch of patches which depend on one 
another.
I did a patch to neutron (https://review.openstack.org/#/c/449831/) which is not 
merged yet, but I also wanted to start another patch which depends on this one 
(https://review.openstack.org/#/c/457816/).
Currently I was trying to do something like:
1. git review -d 
2. git checkout -b new_branch_for_second_patch
3. Make second patch, commit all changes
4. git review <— this will ask me if I really want to push two patches to 
gerrit so I answered „yes”

Everything is easy for me as long as I'm not making more changes to the first patch. 
How should I work with it if, let's say, I want to change something in the first 
patch and later want to make another change to the second patch? IIRC, when I tried 
to do something like that and ran „git review” to push changes to the second patch, 
the first one was also updated (and I lost the changes made to it in another branch).
How should I work with something like that? Is there any guide about this (I 
couldn't find one)?

— 
Best regards
Slawek Kaplonski
sla...@kaplonski.pl



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder]Should os-brick update iSCSI node.startup to "automatic"?

2017-04-19 Thread Rikimaru Honjo

Hi all,

I reported the following bug in os-brick's iSCSI feature and pushed a patch for 
it.

* os-brick's iscsi initiator unexpectedly reverts node.startup from "automatic" to 
"manual".
  https://bugs.launchpad.net/os-brick/+bug/1670237

The patch got -2, but I think that this -2 is based on a misunderstanding.
I explained it on gerrit, but there were no reactions.
So I'd like to hear your opinions!

The important points of the report/patch are,

* Executing "iscsiadm -m discovery..." forcibly reverts node.startup from
  "automatic" to the default value "manual".
  os-brick executes that command.
  At the same time, current os-brick also updates node.startup to "automatic".
  As a result, automatic nodes and manual nodes are mixed now.
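  A quick illustration of that behaviour (target IQN and portal are made up):

    iscsiadm -m node -T iqn.2010-10.org.openstack:volume-x -p 192.0.2.10:3260 \
        | grep node.startup
    # node.startup = automatic

    # re-running discovery rewrites the node record with the default from
    # /etc/iscsi/iscsid.conf ("node.startup = manual" unless changed there)
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
    iscsiadm -m node -T iqn.2010-10.org.openstack:volume-x -p 192.0.2.10:3260 \
        | grep node.startup
    # node.startup = manual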

My opinion for the above issue,

* No one needs node.startup=automatic now.
  os-brick users[1] create/re-create iSCSI sessions when they need them.
  So "manual" is enough.
* Therefore, IMO, os-brick shouldn't update node.startup to "automatic".
* If by any chance someone needs node.startup=automatic, they should set the
  default value to "automatic" in iscsid.conf.

[1]e.g. nova,cinder...

Regards,
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntt-tx.co.jp




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [congress] meeting topics

2017-04-19 Thread Eric K
Hi all,

Proposed topics for the next congress irc meeting are tracked in this
etherpad: https://etherpad.openstack.org/p/congress-meeting-topics
Feel free to add additional topics and/or comment on existing ones. Thanks!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] [ceilometer] Difference between publishers and dispatchers?

2017-04-19 Thread Andres Alvarez
Thanks for clearing that up, guys. Cheers!

On Tue, Apr 18, 2017 at 9:40 AM, Andres Alvarez 
wrote:

> Hi Julien
>
> Thanks for your response. So does this mean that dispatchers will also be
> deprecated (if not already deprecated) in favor of only using publishers?
>
> On Mon, Apr 17, 2017 at 5:49 PM, Julien Danjou  wrote:
>
>> On Mon, Apr 17 2017, Andres Alvarez wrote:
>>
>> Hi Andres,
>>
>> > I am a bit confused on what is the difference between dispatchers and
>> > publishers in Ceilometer. The documentation explains a bit about
>> publishers
>> > in the pipeline, but it does not mention much (if anything) about
>> > dispatchers.
>>
>> Publishers are configured in the pipeline to indicate where to push
>> sample data (e.g. to Gnocchi).
>> One of the publishers is notifier://, which sends the samples to the (now
>> deprecated) ceilometer-collector process.
>>
>> The ceilometer collector stores data into other systems via a dispatcher
>> mechanism (e.g. to Gnocchi). It's now deprecated as it's just, with the
>> current architecture, an unnecessary step: publishers can do the job
>> directly.
>>
>> --
>> Julien Danjou
>> # Free Software hacker
>> # https://julien.danjou.info
>>
>
>
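For anyone who lands on this thread later, a rough sketch of where each
mechanism is configured (file contents and option names are from memory, so
please double-check against your release's documentation):

  # publishers: per-sink entries in the pipeline files read by the notification agent
  grep -A3 publishers /etc/ceilometer/pipeline.yaml
  #       publishers:
  #           - gnocchi://        # push samples straight to Gnocchi
  #           - notifier://       # hand samples to the (deprecated) collector

  # dispatchers: collector-side options in ceilometer.conf
  grep dispatchers /etc/ceilometer/ceilometer.conf
  # meter_dispatchers = gnocchi
  # event_dispatchers = database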
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev