Re: [openstack-dev] [qa] [patrole] Question regarding patrole release management

2017-04-18 Thread Ghanshyam Mann
On Wed, Apr 19, 2017 at 2:29 AM, MONTEIRO, FELIPE C  wrote:
> Hi all,
>
>
>
> I have a question regarding patrole and release management. Many projects
> like heat or murano have a tempest plugin within their repos, so by
> extension their tempest plugins have releases, as the projects change over
> time. However, since patrole is just a tempest plugin, yet heavily reliant
> on tempest, how should patrole do release management?

Hi Felipe,

Release management depends on whether Patrole plans to follow the
branchless model like Tempest (I think it does) or a branched model. If
branchless, then it does not fall under release management and can adopt
the same release model that Tempest has [1].

The branchless model gives many benefits, such as avoiding
backward-incompatible changes and avoiding the maintenance of multiple
Patrole repos across releases.

> Am I correct in
> thinking that it should, in the first place? With nova-network and other
> APIs slated for deprecation in Pike and beyond, Patrole will logically have
> to continuously be maintained to keep up, meaning that older tests, just
> like with Tempest, will have to be phased out. If Patrole, then, does not
> have releases, then older release-dependent tests and functionality will
> over time be lost.

We have 3 cases here:
1. Functionality is deprecated/removed in a new release.
2. Functionality is newly added in a new release.
3. A policy enforcement change.

For case 1, Tempest keeps testing deprecated functionality until it is
marked deprecated across all supported stable branches. Once every
stable branch has that functionality marked as deprecated, we discuss
removing its testing from Tempest.
With the API microversion model it is a little different: functionality
might be deprecated after a specific microversion, or deprecated from
the base version itself. For example, the Nova proxy APIs were
deprecated after 2.36, and the certificate APIs might be removed from
the base version (which is under discussion). These are handled case by
case; based on all stakeholders' points of view, we decide whether their
testing should be removed and, if so, when.
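As a rough illustration of how that can be expressed in a test (a minimal
sketch assuming Tempest's microversion test base classes; the class name
and version bounds are only examples, not an existing test):

    # Keep exercising an API only on microversions where it still exists.
    # The 2.35 upper bound matches the Nova proxy API deprecation in 2.36
    # mentioned above.
    from tempest.api.compute import base

    class ProxyApiCoverageTest(base.BaseV2ComputeTest):
        # Tempest skips the whole class when the cloud cannot serve this range.
        min_microversion = '2.1'
        max_microversion = '2.35'

        def test_proxy_api_still_reachable(self):
            # Placeholder body; a real test would call the deprecated API here.
            pass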

For case 2, Tempest introduces testing of new functionality behind a
feature flag, and those tests are executed or skipped accordingly.
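A minimal sketch of that pattern (the 'some_new_api' option is made up
for illustration; real flags live under groups such as
[compute-feature-enabled] in tempest.conf):

    # Run the test only when the deployer has enabled the feature flag.
    import testtools

    from tempest import config
    from tempest import test

    CONF = config.CONF

    class NewFunctionalityTest(test.BaseTestCase):

        @testtools.skipUnless(CONF.compute_feature_enabled.some_new_api,
                              'some_new_api is not enabled in this cloud')
        def test_new_functionality(self):
            pass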

Case 3 is important to consider for Patrole. Usually policy changes are
made in a backward-compatible way so that they do not break upgrades.
Any change in policy enforcement is done at least by supporting both the
old and new rules [2], or by deprecating the old rules over a period of
at least one release cycle. The branchless model can also detect any
accidental changes that would break an upgrade.
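As a rough sketch of what supporting both old and new rules can look
like with oslo.policy (the rule name and check strings here are purely
illustrative):

    # A policy default that accepts the legacy rule as well as the new role
    # for a transition period, so existing deployments keep working across
    # the upgrade.
    from oslo_policy import policy

    transition_rules = [
        policy.RuleDefault(
            name='os_compute_api:os-example',
            check_str='rule:admin_api or role:reader',
            description='Kept backward compatible for one release cycle'),
    ]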

IMO, the branchless model is good value for Patrole considering all 3
cases, with feature flags handling the new/old/policy-change
functionality. Similarly, the release model can be the same as Tempest's.


>
>
>
> Thank You,
>
> Felipe Monteiro

[1] https://wiki.openstack.org/wiki/QA/releases
[2] https://review.openstack.org/#/c/391113/13

-gmann



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should we delete the (unexposed) os-pci API?

2017-04-18 Thread Matt Riedemann

On 3/17/2017 3:23 PM, Sean Dague wrote:

Yes... with fire.

Realistically this was about the pinnacle of the extensions on
extensions API changes, which is why we didn't even let it into v2 in
the first place.


Here it is: https://review.openstack.org/457854

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][release] release-model of tripleo-common

2017-04-18 Thread Steve Baker
On Wed, Apr 19, 2017 at 1:14 PM, Doug Hellmann 
wrote:

> Excerpts from Steve Baker's message of 2017-04-19 13:05:37 +1200:
> > Other than being consumed as a library, tripleo-common is the home for a
> > number of tripleo related files, image building templates, heat plugins,
> > mistral workbooks.
> >
> > I have a python-tripleoclient[1] change which is failing unit tests
> because
> > it depends on changes in tripleo-common which have landed in the current
> > cycle. Because tripleo-common is release-model cycle-trailing,
> > tripleo-common 7.0.0.0b1 exists but the unit test job pulls in the last
> > full release (6.0.0).
> >
> > I'd like to know the best way of dealing with this, options are:
> > a) make the python import optional, change the unit test to not require
> the
> > newer tripleo-common
> > b) allow the unit test job to pull in pre-release versions like 7.0.0.0b1
> > c) change tripleo-common release-model to cycle-with-intermediary and
> > immediately release a 7.0.0
> >
> > I think going with c) would mean doing a major release at the start of
> each
> > development cycle instead of at the end, then doing releases throughout
> the
> > cycle following our standard semver.
> >
> > [1] https://review.openstack.org/#/c/448300/
>
> As a library, tripleo-common should not use pre-release versioning like
> alphas and betas because of exactly the problem you've discovered: pip
> does not allow them to be installed by default, and so we don't put them
> in our constraint list.
>
> So, you can keep tripleo-common as cycle-trailing, but since it's a
> library use regular versions following semantic versioning rules to
> ensure the new releases go out and can be installed.
>
> You probably do want to start with a 7.0.0 release now, and from
> there on use SemVer to increment (rather than automatically releasing
> a new major version at the start of each cycle).
>
>
>
OK, thanks. We need to determine now whether to release 7.0.0.0b1 as 7.0.0,
or release current master:
http://git.openstack.org/cgit/openstack/tripleo-common/log/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] virtual midcycle thursday 20 april

2017-04-18 Thread Brian Rosmaita
The Glance virtual midcycle meeting will be held on Thursday 20 April
2017 from 14:00-17:00 UTC.

Please signup on the agenda page:
https://etherpad.openstack.org/p/glance-pike-virtual-midcycle

The meeting will be held in Zoom.  Connection information is on the
midcycle etherpad.

See you there!
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Bridging the production/CI workflow gap with large periodic CI jobs

2017-04-18 Thread Wesley Hayutin
On Tue, Apr 18, 2017 at 2:28 PM, Emilien Macchi  wrote:

> On Mon, Apr 17, 2017 at 3:52 PM, Justin Kilpatrick 
> wrote:
> > Because CI jobs tend to max out about 5 nodes there's a whole class of
> > minor bugs that make it into releases.
> >
> > What happens is that they never show up in small clouds, then when
> > they do show up in larger testing clouds the people deploying those
> > simply work around the issue and get onto what they were supposed to
> > be testing. These workarounds do get documented/BZ'd but since they
> > don't block anyone and only show up in large environments they become
> > hard for developers to fix.
> >
> > So the issue gets stuck in limbo, with nowhere to test a patchset and
> > no one owning the issue.
> >
> > These issues pile up and pretty soon there is a significant difference
> > between the default documented workflow and the 'scale' workflow which
> > is filled with workarounds which may or may not be documented
> > upstream.
> >
> > I'd like to propose getting these issues more visibility by having a
> > periodic upstream job that uses 20-30 ovb instances to do a larger
> > deployment. Maybe at 3am on a Sunday or some other time where there's
> > idle execution capability to exploit. The goal being to make these
> > sorts of issues more visible and hopefully get better at fixing them.
>
> Wait no, I know some folks at 3am on a Saturday night who use TripleO
> CI (ok that was a joke).
>
> > To be honest I'm not sure this is the best solution, but I'm seeing
> > this anti pattern across several issues and I think we should try and
> > come up with a solution.
> >
>
> Yes this proposal is really cool. There is an alternative to run this
> periodic scenario outside TripleO CI and send results via email maybe.
> But it is something we need to discuss with RDO Cloud people and see
> if we would have such resources to make it on a weekly frequency.
>

+1
I think with RDO Cloud it's possible to run a test of that scale either in
the tripleo system or just report results; either would be great. Until RDO
Cloud is in full production we might as well begin by running a job
internally with the master-tripleo-ci release config file. The browbeat
jobs are logging here [1]; it will be a fairly simple step to run them with
the upstream content.

Adding Arx Cruz, as he is the point of contact for a tool that distributes
test results from the tripleo periodic jobs, which may come in handy for
this scale test. I'll probably put you two in touch tomorrow.

I'm still looking for opportunities to run browbeat in upstream tripleo as
well.
Could be a productive sync up :)

[1] https://thirdparty-logs.rdoproject.org/

Thanks!



>
> Thanks for bringing this up, it's crucial for us to have this kind of
> feedback, now let's take actions.
> --
> Emilien Macchi
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][release] release-model of tripleo-common

2017-04-18 Thread Doug Hellmann
Excerpts from Steve Baker's message of 2017-04-19 13:05:37 +1200:
> Other than being consumed as a library, tripleo-common is the home for a
> number of tripleo related files, image building templates, heat plugins,
> mistral workbooks.
> 
> I have a python-tripleoclient[1] change which is failing unit tests because
> it depends on changes in tripleo-common which have landed in the current
> cycle. Because tripleo-common is release-model cycle-trailing,
> tripleo-common 7.0.0.0b1 exists but the unit test job pulls in the last
> full release (6.0.0).
> 
> I'd like to know the best way of dealing with this, options are:
> a) make the python import optional, change the unit test to not require the
> newer tripleo-common
> b) allow the unit test job to pull in pre-release versions like 7.0.0.0b1
> c) change tripleo-common release-model to cycle-with-intermediary and
> immediately release a 7.0.0
> 
> I think going with c) would mean doing a major release at the start of each
> development cycle instead of at the end, then doing releases throughout the
> cycle following our standard semver.
> 
> [1] https://review.openstack.org/#/c/448300/

As a library, tripleo-common should not use pre-release versioning like
alphas and betas because of exactly the problem you've discovered: pip
does not allow them to be installed by default, and so we don't put them
in our constraint list.
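
For what it's worth, the pre-release filtering is easy to reproduce with
the packaging library that pip uses for version matching (a small sketch,
not tied to the tripleo jobs themselves):

    # PEP 440 pre-releases such as 7.0.0.0b1 are excluded from ordinary
    # specifier matching unless pre-releases are explicitly allowed, which
    # is why the unit test job falls back to the last full release (6.0.0).
    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    spec = SpecifierSet('>=6.0.0')
    beta = Version('7.0.0.0b1')

    print(beta in spec)                           # False: pre-release filtered out
    print(spec.contains(beta, prereleases=True))  # True when explicitly allowed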

So, you can keep tripleo-common as cycle-trailing, but since it's a
library use regular versions following semantic versioning rules to
ensure the new releases go out and can be installed.

You probably do want to start with a 7.0.0 release now, and from
there on use SemVer to increment (rather than automatically releasing
a new major version at the start of each cycle).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][release] release-model of tripleo-common

2017-04-18 Thread Steve Baker
Other than being consumed as a library, tripleo-common is the home for a
number of tripleo related files, image building templates, heat plugins,
mistral workbooks.

I have a python-tripleoclient[1] change which is failing unit tests because
it depends on changes in tripleo-common which have landed in the current
cycle. Because tripleo-common is release-model cycle-trailing,
tripleo-common 7.0.0.0b1 exists but the unit test job pulls in the last
full release (6.0.0).

I'd like to know the best way of dealing with this, options are:
a) make the python import optional, change the unit test to not require the
newer tripleo-common
b) allow the unit test job to pull in pre-release versions like 7.0.0.0b1
c) change tripleo-common release-model to cycle-with-intermediary and
immediately release a 7.0.0

I think going with c) would mean doing a major release at the start of each
development cycle instead of at the end, then doing releases throughout the
cycle following our standard semver.

[1] https://review.openstack.org/#/c/448300/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-18 Thread Emilien Macchi
On Tue, Apr 18, 2017 at 6:20 PM, Jason E. Rist  wrote:
> On 04/18/2017 02:28 AM, Steven Hardy wrote:
>> On Thu, Apr 06, 2017 at 11:53:04AM +0200, Martin André wrote:
>> > Hellooo,
>> >
>> > I'd like to propose we extend Florian Fuchs +2 powers to the
>> > tripleo-validations project. Florian is already core on tripleo-ui
>> > (well, tripleo technically so this means there is no changes to make
>> > to gerrit groups).
>> >
>> > Florian took over many of the stalled patches in tripleo-validations
>> > and is now the principal contributor in the project [1]. He has built
>> > a good expertise over the last months and I think it's time he has
>> > officially the right to approve changes in tripleo-validations.
>> >
>> > Consider this my +1 vote.
>>
>> +1
>>
> What do we have to do to make this official?

done

> -J
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][glance][ironic][octavia][nova]oslo.config 4.0 will break projects' unit test

2017-04-18 Thread Matt Riedemann

On 4/17/2017 9:08 PM, ChangBo Guo wrote:

Nova:  https://review.openstack.org/457188  need review


This change introduced an intermittent failure in one of the tests:

https://bugs.launchpad.net/nova/+bug/1683953

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-18 Thread Jason E. Rist
On 04/18/2017 02:28 AM, Steven Hardy wrote:
> On Thu, Apr 06, 2017 at 11:53:04AM +0200, Martin André wrote:
> > Hellooo,
> >
> > I'd like to propose we extend Florian Fuchs +2 powers to the
> > tripleo-validations project. Florian is already core on tripleo-ui
> > (well, tripleo technically so this means there is no changes to make
> > to gerrit groups).
> >
> > Florian took over many of the stalled patches in tripleo-validations
> > and is now the principal contributor in the project [1]. He has built
> > a good expertise over the last months and I think it's time he has
> > officially the right to approve changes in tripleo-validations.
> >
> > Consider this my +1 vote.
>
> +1
>
What do we have to do to make this official?

-J

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] notification update week 15

2017-04-18 Thread Matt Riedemann

On 4/18/2017 3:55 AM, Balazs Gibizer wrote:

Hi,

Here is the status update / focus setting mail about notification work
for week 15.

Bugs

[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance
notifications are sent with inconsistent timestamp format. We need to
iterate more on the solution I updated last week.
https://review.openstack.org/#/c/421981


Versioned notification transformation
-
The volume_attach and detach notifications are still in focus to support
Searchlight switching to the versioned notifications. Both are ready for
core review:
* https://review.openstack.org/#/c/401992/ Transform
instance.volume_attach notification
* https://review.openstack.org/#/c/408676/ Transform
instance.volume_detach notification


Searchlight integration
---
changing Searchlight to use versioned notifications
~~~
https://blueprints.launchpad.net/searchlight/+spec/nova-versioned-notifications

This bp is a hard dependency for the integration work.

The patch on the Searchlight side https://review.openstack.org/#/c/453352/
evolves gradually.


Cool, this is a pleasant surprise.




bp additional-notification-fields-for-searchlight
~
Besides volume_attach and volume_detach we need the following patches to
help Searchlight integration:
https://review.openstack.org/#/q/status:open+topic:bp/additional-notification-fields-for-searchlight


The debate about adding the tags to the instance payload now seems to be
moving in the direction of adding the tags only to instance.create and
instance.update. This means that the related patch needs to be reworked.


An update from the notifications meeting today: we agreed to just put 
instance.tags on the instance.update notifications for now. Kevin Zheng 
has a blueprint to tag an instance during create, and we'll add 
instance.tags to the instance.create notification as part of that change 
(including creating a new payload for the instance.create 
notifications). gibi is going to take over the patch that puts 
instance.tags into the instance.update notification.




The auto_disk_config patch only needs a second +2
https://review.openstack.org/#/c/419185/


I left some comments on this one today, but my concern is mostly the 
same old story now about putting more fields in the base instance action 
payload, which means they have to go into every instance action payload 
even though they're really only used in a few operations from what I can 
tell (create, resize, rebuild, and maybe a couple of others like rescue 
and unshelve; I suppose migrations too). I'm less concerned about this 
one, though, since it does not require lazy-loading fields on the 
instance object (like we had to do with instance.tags).




The author of the keypairs patch
https://review.openstack.org/#/c/419730/, Anusha, told me that she was
moved away from OpenStack, so we need somebody to pick this patch up and
rebase it.


I've asked Kevin Zheng to see if he can take over this one or any others 
in this blueprint series that need help now.




We still need to work on the https://review.openstack.org/#/c/448779/
(Add BDM to InstancePayload) patch from a testing point of view.


Other items
---
No progress since last week, so see the last mail
http://lists.openstack.org/pipermail/openstack-dev/2017-April/115179.html for
details.

Weekly meeting
--
The notification subteam holds its weekly meeting on Tuesdays at 17:00 UTC
in openstack-meeting-4, so the next meeting will be held on the 18th of
April.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170418T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Michał Jastrzębski
On 18 April 2017 at 13:54, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:
>> On 18 April 2017 at 12:41, Doug Hellmann  wrote:
>> > Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
>> >> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
>> >> wrote:
>> >>
>> >> > Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
>> >> > > My dear Kollegues,
>> >> > >
>> >> > > Today we had discussion about how to properly name/tag images being
>> >> > > pushed to dockerhub. That moved towards general discussion on revision
>> >> > > mgmt.
>> >> > >
>> >> > > Problem we're trying to solve is this:
>> >> > > If you build/push images today, your tag is 4.0
>> >> > > if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>> >> > > we tag new release.
>> >> > >
>> >> > > But image built today is not equal to image built tomorrow, so we
>> >> > > would like something like 4.0.0-1, 4.0.0-2.
>> >> > > While we can reasonably detect history of revisions in dockerhub,
>> >> > > local env will be extremely hard to do.
>> >> > >
>> >> > > I'd like to ask you for opinions on desired behavior and how we want
>> >> > > to deal with revision management in general.
>> >> > >
>> >> > > Cheers,
>> >> > > Michal
>> >> > >
>> >> >
>> >> > What's in the images, kolla? Other OpenStack components?
>> >>
>> >>
>> >> Yes, each image will typically contain all software required for one
>> >> OpenStack service, including dependencies from OpenStack projects or the
>> >> base OS. Installed via some combination of git, pip, rpm, deb.
>> >>
>> >> > Where does the
>> >> > 4.0.0 come from?
>> >> >
>> >> >
>> >> Its the python version string from the kolla project itself, so ultimately
>> >> I think pbr. I'm suggesting that we switch to using the
>> >> version.release_string[1] which will tag with the longer version we use 
>> >> for
>> >> other dev packages.
>> >>
>> >> [1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py
>> >
>> > Why are you tagging the artifacts containing other projects with the
>> > version number of kolla, instead of their own version numbers and some
>> > sort of incremented build number?
>>
>> This is what we do in Kolla and I'd say logistics and simplicity of
>> implementation. Tags are more than just information for us. We have to
>
> But for a user consuming the image, they have no idea what version of
> nova is in it because the version on the image is tied to a different
> application entirely.

That's easy enough to check, though (just docker exec into the container
and run pip freeze). On the other hand, you'll have the information that
"this set of various versions was tested together", which is arguably more
important.

>> deploy these images and we have to know a tag. Combine that with clear
>> separation of build phase from deployment phase (really build phase is
>> entirely optional thanks to dockerhub), you'll end up with either
>> automagical script that will have to somehow detect correct version
>> mix of containers that works with each other, or hand crafted list
>> that will have 100+ versions hardcoded.
>>
>> Incremental build is hard because builds are atomic and you never
>> really know how many times images were rebuilt (also local rebuilt vs
>> dockerhub-pushed rebuild will cause collisions in tags).
>>
>> > Doug
>> >
>>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] version document for project navigator

2017-04-18 Thread Jimmy McArthur

Thank you, sir! :)


Monty Taylor 
April 18, 2017 at 3:38 PM
I just sent out the email requesting folks send in patches. Maybe 
we'll get a flood of them now ...





Jimmy McArthur 
April 18, 2017 at 8:46 AM
All, we have modified our ingest tasks to look for this new data. Can 
we get an ETA on when to expect updates from the majority of projects? 
Right now, there isn't too much to test with.


Cheers,
Jimmy


Thierry Carrez 
April 14, 2017 at 3:21 AM

OK approved.

Doug Hellmann 
April 13, 2017 at 11:43 AM

+1

The multi-file format was what the navigator team wanted, and there's
plenty of support for it among other reviewers. Let's move this forward.

Doug

Thierry Carrez 
April 13, 2017 at 11:03 AM

Do we really need the TC approval on this ? It's not a formal governance
change or anything.

Whoever has rights on that repo could approve it now and ask for
forgiveness later :)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-04-18 13:37:30 -0700:
> On 18 April 2017 at 12:41, Doug Hellmann  wrote:
> > Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
> >> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
> >> wrote:
> >>
> >> > Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
> >> > > My dear Kollegues,
> >> > >
> >> > > Today we had discussion about how to properly name/tag images being
> >> > > pushed to dockerhub. That moved towards general discussion on revision
> >> > > mgmt.
> >> > >
> >> > > Problem we're trying to solve is this:
> >> > > If you build/push images today, your tag is 4.0
> >> > > if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
> >> > > we tag new release.
> >> > >
> >> > > But image built today is not equal to image built tomorrow, so we
> >> > > would like something like 4.0.0-1, 4.0.0-2.
> >> > > While we can reasonably detect history of revisions in dockerhub,
> >> > > local env will be extremely hard to do.
> >> > >
> >> > > I'd like to ask you for opinions on desired behavior and how we want
> >> > > to deal with revision management in general.
> >> > >
> >> > > Cheers,
> >> > > Michal
> >> > >
> >> >
> >> > What's in the images, kolla? Other OpenStack components?
> >>
> >>
> >> Yes, each image will typically contain all software required for one
> >> OpenStack service, including dependencies from OpenStack projects or the
> >> base OS. Installed via some combination of git, pip, rpm, deb.
> >>
> >> > Where does the
> >> > 4.0.0 come from?
> >> >
> >> >
> >> Its the python version string from the kolla project itself, so ultimately
> >> I think pbr. I'm suggesting that we switch to using the
> >> version.release_string[1] which will tag with the longer version we use for
> >> other dev packages.
> >>
> >> [1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py
> >
> > Why are you tagging the artifacts containing other projects with the
> > version number of kolla, instead of their own version numbers and some
> > sort of incremented build number?
> 
> This is what we do in Kolla and I'd say logistics and simplicity of
> implementation. Tags are more than just information for us. We have to

But for a user consuming the image, they have no idea what version of
nova is in it because the version on the image is tied to a different
application entirely.

> deploy these images and we have to know a tag. Combine that with clear
> separation of build phase from deployment phase (really build phase is
> entirely optional thanks to dockerhub), you'll end up with either
> automagical script that will have to somehow detect correct version
> mix of containers that works with each other, or hand crafted list
> that will have 100+ versions hardcoded.
> 
> Incremental build is hard because builds are atomic and you never
> really know how many times images were rebuilt (also local rebuilt vs
> dockerhub-pushed rebuild will cause collisions in tags).
> 
> > Doug
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] version document for project navigator

2017-04-18 Thread Monty Taylor
I just sent out the email requesting folks send in patches. Maybe we'll 
get a flood of them now ...


On 04/18/2017 08:46 AM, Jimmy McArthur wrote:

All, we have modified our ingest tasks to look for this new data. Can we
get an ETA on when to expect updates from the majority of projects?
Right now, there isn't too much to test with.

Cheers,
Jimmy


Thierry Carrez 
April 14, 2017 at 3:21 AM

OK approved.

Doug Hellmann 
April 13, 2017 at 11:43 AM

+1

The multi-file format was what the navigator team wanted, and there's
plenty of support for it among other reviewers. Let's move this forward.

Doug

Thierry Carrez 
April 13, 2017 at 11:03 AM

Do we really need the TC approval on this ? It's not a formal governance
change or anything.

Whoever has rights on that repo could approve it now and ask for
forgiveness later :)

Monty Taylor 
April 13, 2017 at 10:25 AM
On 04/13/2017 08:28 AM, Jimmy McArthur wrote:

Just checking on the progress of this. :)


Unfortunately a good portion of the TC was away this week at the
leadership training so getting a final ok on it was a bit stalled.
It's seeming like the multi-file version is the one most people like
though, so I'm mostly expecting that to be what we end up with. We
should be able to get final approval by Tuesday, and then can work on
getting all of the project info filled in.


Monty Taylor 
April 7, 2017 at 7:05 AM


There is a new repo now:

http://git.openstack.org/cgit/openstack/project-navigator-data

I have pushed up two different patches with two different approaches:

https://review.openstack.org/#/c/454691
https://review.openstack.org/#/c/454688

One is a single file per release. The other is a file per service per
release.

Benefits of the single-file are that it's a single file to pull and
parse.

Benefits of the multi-file approach are that projects can submit
documents for themselves as patches without fear of merge conflicts,
and that the format is actually _identical_ to the format for version
discovery from the API-WG, minus the links section.

I think I prefer the multi-file approach, but would be happy either
way.

Jimmy McArthur 
April 6, 2017 at 3:51 PM
Cool. Thanks Monty!


Monty Taylor 
April 6, 2017 at 3:21 PM
On 04/06/2017 11:58 AM, Jimmy McArthur wrote:

Assuming this format is accepted, do you all have any sense of when
this
data will be complete for all projects?


Hopefully "soon" :)

Honestly, it's not terribly difficult data to produce, so once we're
happy with it and where it goes, crowdsourcing filling it all in
should go quickly.


Jimmy McArthur 
April 5, 2017 at 8:59 AM
FWIW, from my perspective on the Project Navigator side, this format
works great. We can actually derive the age of the project from this
information as well by identifying the first release that has API
data
for a particular project. I'm indifferent about where it lives, so
I'd
defer to you all to determine the best spot.

I really appreciate you all putting this together!

Jimmy


Thierry Carrez 
April 5, 2017 at 5:28 AM

Somehow missed this thread, so will repost here comments I made
elsewhere:

This looks good, but I would rather not overload the releases
repository. My personal preference (which was also expressed by
Doug in the TC meeting) would be to set this information up in a
"project-navigator" git repo that we would reuse for any
information we
need to collect from projects for accurate display on the project
navigator. If the data is not maintained anywhere else (or easily
derivable from existing data), we would use that repository to
collect
it from projects.

That way there is a clear place to 

Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Michał Jastrzębski
On 18 April 2017 at 12:41, Doug Hellmann  wrote:
> Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
>> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
>> wrote:
>>
>> > Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
>> > > My dear Kollegues,
>> > >
>> > > Today we had discussion about how to properly name/tag images being
>> > > pushed to dockerhub. That moved towards general discussion on revision
>> > > mgmt.
>> > >
>> > > Problem we're trying to solve is this:
>> > > If you build/push images today, your tag is 4.0
>> > > if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>> > > we tag new release.
>> > >
>> > > But image built today is not equal to image built tomorrow, so we
>> > > would like something like 4.0.0-1, 4.0.0-2.
>> > > While we can reasonably detect history of revisions in dockerhub,
>> > > local env will be extremely hard to do.
>> > >
>> > > I'd like to ask you for opinions on desired behavior and how we want
>> > > to deal with revision management in general.
>> > >
>> > > Cheers,
>> > > Michal
>> > >
>> >
>> > What's in the images, kolla? Other OpenStack components?
>>
>>
>> Yes, each image will typically contain all software required for one
>> OpenStack service, including dependencies from OpenStack projects or the
>> base OS. Installed via some combination of git, pip, rpm, deb.
>>
>> > Where does the
>> > 4.0.0 come from?
>> >
>> >
>> Its the python version string from the kolla project itself, so ultimately
>> I think pbr. I'm suggesting that we switch to using the
>> version.release_string[1] which will tag with the longer version we use for
>> other dev packages.
>>
>> [1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py
>
> Why are you tagging the artifacts containing other projects with the
> version number of kolla, instead of their own version numbers and some
> sort of incremented build number?

This is what we do in Kolla and I'd say logistics and simplicity of
implementation. Tags are more than just information for us. We have to
deploy these images and we have to know a tag. Combine that with clear
separation of build phase from deployment phase (really build phase is
entirely optional thanks to dockerhub), you'll end up with either
automagical script that will have to somehow detect correct version
mix of containers that works with each other, or hand crafted list
that will have 100+ versions hardcoded.

Incremental build is hard because builds are atomic and you never
really know how many times images were rebuilt (also local rebuilt vs
dockerhub-pushed rebuild will cause collisions in tags).
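
To make the discussion concrete, one scheme floated in this thread is the
pbr-derived version plus a build datestamp, roughly like this (assumes the
kolla package is installed; this is a sketch, not what kolla-build does
today):

    # Distinct tags for images built on different days: <version>-<datestamp>.
    from datetime import datetime

    from pbr import version

    base = version.VersionInfo('kolla').version_string()  # e.g. '4.0.0'
    stamp = datetime.utcnow().strftime('%Y%m%d')           # e.g. '20170418'
    print('{0}-{1}'.format(base, stamp))                    # e.g. '4.0.0-20170418'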

> Doug
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [third-party-ci] pkvmci ironic job breakage details

2017-04-18 Thread Michael Turek

On 04/18/2017 07:56 AM, Vladyslav Drok wrote:

Hey Michael,

On Fri, Apr 14, 2017 at 6:51 PM, Michael Turek 
> wrote:


Hey ironic-ers,

So our third party CI job for ironic has been, and remains,
broken. I was able to do some investigation today and here's a
summary of what we're seeing. I'm hoping someone might know the
root of the problem.

For reference, please see this paste and the logs of the job that
I was working in:
http://paste.openstack.org/show/606564/


https://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/ironic/25/454625/10/check-ironic/tempest-dsvm-ironic-agent_ipmitool/0520958/



I've redacted the credentials in the ironic node-show for obvious
reasons but rest assured they are properly set. These commands are
run while
'/opt/stack/new/ironic/devstack/lib/ironic:wait_for_nova_resources'
is looping.

Basically, the ironic hypervisor for the node doesn't appear. As
well, none of the node's properties make it to the hypervisor stats.

There is some more strangeness with the 'count' value from
'openstack hypervisor stats show'. Though no hypervisors appear,
the count is still 1. Since the run was broken, I decided to
delete node-0 (about 3-5 minutes before the run failed) and see if
it updated the count. It did.

Does anyone have any clue what might be happening here? Any advice
would be appreciated!


So the failure seems to be here -- 
https://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/ironic/25/454625/10/check-ironic/tempest-dsvm-ironic-agent_ipmitool/0520958/screen-ir-api.txt.gz, 
API and conductor are not able to communicate via RPC for some reason. 
Need to investigate this more. Do you mind filing a bug about this?



Thanks,
mjturek








Thanks vdrok,

Bug is opened here - https://bugs.launchpad.net/ironic/+bug/1683902
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][ptls][tc] help needed filling out project-navigator data

2017-04-18 Thread Monty Taylor

Hey everybody!

The Foundation is rolling out a new version of the Project Navigator. 
One of the things it contains is a section that shows API versions 
available for each project for each release. They asked the TC's help in 
providing that data, so we spun up a new repository:


  http://git.openstack.org/cgit/openstack/project-navigator-data

that the Project Navigator will consume.

We need your help!

The repo contains a file for each project for each release with 
CURRENT/SUPPORTED/DEPRECATED major versions and also microversion ranges 
if they exist. The data is pretty much exactly what everyone already 
produces in their version discovery documents - although it's normalized 
into the format described by the API-WG:



https://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html#version-discovery
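
For reference, a document in that normalized format looks roughly like
this (a hand-written sketch following the guideline above, with the links
section omitted; the values are illustrative, not taken from the repo):

    # The structure mirrors the API-WG version discovery document.
    import json

    nova_versions = {
        "versions": [
            {"id": "v2.0", "status": "SUPPORTED",
             "min_version": "", "max_version": ""},
            {"id": "v2.1", "status": "CURRENT",
             "min_version": "2.1", "max_version": "2.38"},
        ],
    }
    print(json.dumps(nova_versions, indent=2))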

What would be really helpful is if someone from each project could go 
make a patch to the repo adding the historical (and current) info for 
your project. We'll come up with a process for maintaining it over time 
- but for now just crowdsourcing the data seems like the best way.


The README file explains the format, and there is data from a few of the 
projects for Newton.


It would be great to include an entry for every release - which for many 
projects will just be the same content copied a bunch of times back to 
the first release the project was part of OpenStack.


This is only needed for service projects (something that registers in 
the keystone catalog) and is only needed for 'main' APIs (like, it is 
not needed, for now, to put in things like Placement)


If y'all could help - it would be super great!

Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][elections]questions about one platform vision

2017-04-18 Thread Zane Bitter

On 18/04/17 10:39, Flavio Percoco wrote:

On 16/04/17 09:03 +0100, Neil Jerram wrote:

FWIW, I think the Lego analogy is not actually helpful for another
reason: it has vastly too many ways of combining, and (hence) no sense
at all of consistency / interoperability between the different things
that you can construct with it. Whereas for OpenStack I believe you
are also aiming for some forms of consistency and interoperability.


I agree that this is another important limitation of the analogy.


Could you expand on why you think the lego analogy does not cover
consistency
and interoperability?


This is one of the interesting ways in which building an application 
with multiple independently-governed deployments (like OpenStack) 
differs from building a hosted service (like AWS).


So if you look at https://aws.amazon.com/products/ there is currently 
somewhere north of 90 services listed. Despite this, literally nobody 
ever says things like "Wow, that's so many services... do I have to use 
all of them?" Because it's obvious: as an application 
developer/deployer, you use the ones you need, you don't use the ones 
you don't, and there's no problem. (People, of course, *do* say things 
like "Wow, that's so many services, how do I find the one I need?" or 
"What does all this stuff do?" or "What do all these silly names mean?")


Now with OpenStack, operators look at 
https://www.openstack.org/software/project-navigator and say "Wow, 
that's so many services... do I have to use all of them?" And for the 
most part the answer ought to be equally obvious: install the ones that 
application developers deploying to your cloud need, and not the ones 
they don't. (Strangely, though, we don't often talk about application 
developer needs... OpenStack is largely marketed to operators of clouds, 
with the apparent assumption that application developers will use 
whatever they're given. IMHO this is a colossal mistake - it's the exact 
mechanism by which a lot of really *very* bad 'Enterprise' software has 
gotten developed over the years.)


However, there's a subtlety to it: with a single hosted service, all the 
stuff is already there, and if you want to start using it you just 
start. With multiple deployments, the thing you want may or may not be 
there, or it may be there on only some of the clouds you want to use, or 
it may not even get properly integrated at all.


So, for example, AWS services can all deliver notifications to SNS 
topics that application developers can reliably use to e.g. trigger 
Lambda functions when some event occurs. If you don't need that you 
don't use them, but when you do they'll be right there. (Azure and 
Google also have equivalent functionality.)


OpenStack has equivalents to SNS & Lambda for these purposes (Zaqar & 
Mistral), but they're not installed in most clouds. If you find you need 
them then you have to beg your cloud operator. Even if that works, 
you'll lose interoperability with most other OpenStack clouds by 
depending on them. And in any event, hardly any OpenStack services 
actually produce notifications for Zaqar anyway because most people 
assume that it's just a layered service that only a small class of 
applications might use, and not something that ought to be an integral 
part of any cloud.


Meanwhile in unrelated news, it's 2017 and Horizon still works by 
polling all of the things all of the time.


There certainly exists a bunch of stuff that you can just layer on - 
e.g. if you need Hadoop as a service you should probably install Sahara, 
and if you don't then it won't hurt at all if you don't. But the set of 
stuff that needs to be tightly integrated together and present in most 
deployments to yield an interoperable cloud service is considerably 
larger than just what you need to run a VPS, which is all that we've 
really acknowledged since the start of the 'big tent' debate.


If we treat _everything_ as just an independent building block on top of 
the core VPS functionality then we'll never develop the integrated core 
*cloud* functionality that we need to attract application developers to 
the platform by choice (and hopefully convert them into contributors!).


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][elections]questions about one platform vision

2017-04-18 Thread Doug Hellmann
Excerpts from Adam Lawson's message of 2017-04-18 10:33:32 -0700:
> My personal feeling:
> 
> We need to be very very careful. While I really respect Jay Pipes and his
> commentary, I fundamentally disagree with his toolbox mindset. OpenStack is
> one tool in the Enterprise toolbox. It isn't a toolbox. K8s is another tool
> in the toolbox since it's turning out to be much more than just a container
> management platform. Believe it or not AWS is another.
> 
> I've been an OpenStack architect for at least 5+ years now and work with
> many large Fortune 100 IT shops. OpenStack in the enterprise is being used
> to orchestrate virtual machines. Despite the additional capabilities
> OpenStack is trying to accommodate, that's basically it. At scale, that's
> what they're doing. Not many are orchestrating bare metal that I've seen or
> heard. And they are exploring K8s and Docker Swarm to orchestrate
> containers. They aren't looking at OpenStack to do that. I recently
> attended the K8s conference in Berlin this year and I'll tell you, the
> container community is not looking at OpenStack as the means to manage
> containers. If they are, they were likely sitting at the OpenStack booth.
> Additionally these enterprises are not going to use two platforms side by
> side with two means of orchestrating resources. That's both unrealistic and
> understandable. Shoe-horning K8s into an OpenStack model really underserves
> the container user space.
> 
> OpenStack's approach is to treat K8s as a tool.
> K8s is working to classify OpenStack as a tool.
> 
> So to me we're one of two - maybe one of three solid FOSS cloud platforms -
> not including Azure and AWS which are both trending up in consumer adoption
> again. All of these are aiming to orchestrate the same resources and in
> different ways, they each do it very well. A One Platform vision coming
> from the minds within one of those projects creates unnecessary friction
> and sounds a little small-minded. Big world out there - we're not the only
> player.

I won't speak for anyone else, but when I say that OpenStack should
be "one thing" I definitely don't mean "the only thing." What I
mean is that someone selecting several OpenStack components to use
to build their stack should have those components behave in similar
ways, and use similar deployment and end-user patterns so that once
someone figures out one component the next is easier to figure out.
It would be great if those components were able to integrate directly
with other tools not produced by the community (serving volumes to
containers with Cinder or authenticating non-OpenStack apps with
Keystone, for example), but that's secondary to the components
produced by our community actually working together, in my mind.

Doug

> 
> In the end I guess I'm trying to say that we need to be careful when we
> make assertions because this vision sounds like we're drinking too much of
> our own Kool-Aid. When we assume our platform orchestrates the heap, we
> need to understand there are several other heaps getting bigger and do
> things OpenStack can't. If we buy into a marketing vision, we start a
> downward path towards where Eucalyptus and CloudStack are today.
> 
> Just my oh, 3 cents worth. ; )
> 
> //adam
> 
> 
> *Adam Lawson*
> 
> Principal Architect
> Office: +1-916-794-5706 <(916)%20794-5706>
> 
> On Tue, Apr 18, 2017 at 7:39 AM, Flavio Percoco  wrote:
> 
> > On 16/04/17 09:03 +0100, Neil Jerram wrote:
> >
> >> FWIW, I think the Lego analogy is not actually helpful for another
> >> reason: it has vastly too many ways of combining, and (hence) no sense at
> >> all of consistency / interoperability between the different things that you
> >> can construct with it. Whereas for OpenStack I believe you are also aiming
> >> for some forms of consistency and interoperability.
> >>
> >
> > Could you expand on why you think the lego analogy does not cover
> > consistency
> > and interoperability?
> >
> >
> > Flavio
> >
> > --
> > @flaper87
> > Flavio Percoco
> >
> >
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Doug Hellmann
Excerpts from Steve Baker's message of 2017-04-18 10:46:43 +1200:
> On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
> wrote:
> 
> > Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
> > > My dear Kollegues,
> > >
> > > Today we had discussion about how to properly name/tag images being
> > > pushed to dockerhub. That moved towards general discussion on revision
> > > mgmt.
> > >
> > > Problem we're trying to solve is this:
> > > If you build/push images today, your tag is 4.0
> > > if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
> > > we tag new release.
> > >
> > > But image built today is not equal to image built tomorrow, so we
> > > would like something like 4.0.0-1, 4.0.0-2.
> > > While we can reasonably detect history of revisions in dockerhub,
> > > local env will be extremely hard to do.
> > >
> > > I'd like to ask you for opinions on desired behavior and how we want
> > > to deal with revision management in general.
> > >
> > > Cheers,
> > > Michal
> > >
> >
> > What's in the images, kolla? Other OpenStack components?
> 
> 
> Yes, each image will typically contain all software required for one
> OpenStack service, including dependencies from OpenStack projects or the
> base OS. Installed via some combination of git, pip, rpm, deb.
> 
> > Where does the
> > 4.0.0 come from?
> >
> >
> Its the python version string from the kolla project itself, so ultimately
> I think pbr. I'm suggesting that we switch to using the
> version.release_string[1] which will tag with the longer version we use for
> other dev packages.
> 
> [1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py

Why are you tagging the artifacts containing other projects with the
version number of kolla, instead of their own version numbers and some
sort of incremented build number?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Fox, Kevin M
Yes. Within a stable branch, the kolla code hardly ever changes, but nova
may put out a new release, or openssl may put out a new release.

From: Michał Jastrzębski [inc...@gmail.com]
Sent: Tuesday, April 18, 2017 9:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

Our issue is a bit complex, though. Dockerhub-pushed images are less
affected by the version of our code than by the versions of everyone
else's code.

On 18 April 2017 at 07:36, Flavio Percoco  wrote:
> On 13/04/17 13:48 +1200, Steve Baker wrote:
>>
>> On Thu, Apr 13, 2017 at 10:59 AM, Michał Jastrzębski 
>> wrote:
>>
>>> My dear Kollegues,
>>>
>>> Today we had discussion about how to properly name/tag images being
>>> pushed to dockerhub. That moved towards general discussion on revision
>>> mgmt.
>>>
>>> Problem we're trying to solve is this:
>>> If you build/push images today, your tag is 4.0
>>> if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>>> we tag new release.
>>>
>>> But image built today is not equal to image built tomorrow, so we
>>> would like something like 4.0.0-1, 4.0.0-2.
>>> While we can reasonably detect history of revisions in dockerhub,
>>> local env will be extremely hard to do.
>>>
>>> I'd like to ask you for opinions on desired behavior and how we want
>>> to deal with revision management in general.
>>>
>>>
>> I already have a change which proposes tagging images with a pbr built
>> version [1]. I think if users want tags which are stable for the duration
>> of a major release they should switch to using the tag specified by
>> kolla-build.conf base_tag, which can be set to latest, ocata, pike, etc.
>> This would leave the version tag to at least track changes to the kolla
>> repo itself. Since the contents of upstream kolla images come from such
>> diverse sources, all I could suggest to ensure unique tags are created for
>> unique images is to append a datestamp to [1] (or have an extra datestamp
>> based tag). Bonus points for only publishing a new datestamp tag if the
>> contents of the image really changes.
>>
>> In the RDO openstack-kolla package we now tag images with the
>> {Version}-{Release} of the openstack-kolla package which built it[2]. I
>> realise this doesn't solve the problem of the tag needing to change when
>> other image contents need to be updated, but I believe this can be solved
>> within the RDO image build pipeline by incrementing the {Release} whenever
>> a new image needs to be published.
>>
>> [1] https://review.openstack.org/#/c/448380/
>> [2] https://review.rdoproject.org/r/#/c/5923/1/openstack-kolla.spec
>
>
> I like this option better because it's more consistent with how things are
> done
> elsewhere in OpenStack.
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][swg]The 2019 TC Vision Draft - anonymous feedback now open

2017-04-18 Thread Colette Alexander
Hello Stackers,

By now, you all should have received an invitation to comment
anonymously[0] on the current draft of a TC Vision for 2019 that is up for
review in the governance repository[1].

We're going to leave the survey open for comment until the Boston Forum,
but I'd like to start incorporating feedback earlier by sending data and
information directly to the TC and setting aside some discussion time for
themes in the feedback in the next few TC meetings.

That means a couple of things:
1) Getting your feedback in sooner rather than later means it has a higher
likelihood of being integrated into the vision.
2) Discussion around vision feedback will occur in TC meetings, and then at
our Forum discussion on the vision[2]

We'd love to hear from you in both the survey/review forms as well as
during these discussions. Please stay tuned to the next few TC meetings,
and join us in Boston if you can. If you'd like to discuss the vision work
with TC members, you can also join #openstack-swg and ping people there.

Thanks so much!

-colette/gothicmindfood


[0] If you didn't, here's the link: http://www.surveygizmo.com/s3/3490299/TC-2019-Vision
[1] Here's the Gerrit review: https://review.openstack.org/#/c/453262/
[2] https://www.openstack.org/summit/boston-2017/summit-schedule/events/17585/the-openstack-technical-committee-vision-for-2019-updates-stories-q-and-a
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Stephen Hindle
On Tue, Apr 18, 2017 at 9:27 AM, Fox, Kevin M  wrote:
> Kolla's general target is in the node range of 1 - 100 nodes. A lot of
> smaller clusters can't afford to do ci/cd of their own and so Kolla
> currently pushes all the work to the site, and the site can't afford to do
> it right, and so just doesn't deal with security updates.
>

I agree - as more and more of the '10' end of the '1-100' nodes start
using kolla to experiment with OpenStack, having pre-baked images with
security/patches/etc. becomes more important...

-- 
Stephen Hindle - Senior Systems Engineer
480.807.8189
www.limelight.com Delivering Faster Better



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Bridging the production/CI workflow gap with large periodic CI jobs

2017-04-18 Thread Emilien Macchi
On Mon, Apr 17, 2017 at 3:52 PM, Justin Kilpatrick  wrote:
> Because CI jobs tend to max out about 5 nodes there's a whole class of
> minor bugs that make it into releases.
>
> What happens is that they never show up in small clouds, then when
> they do show up in larger testing clouds the people deploying those
> simply work around the issue and get onto what they where supposed to
> be testing. These workarounds do get documented/BZ'd but since they
> don't block anyone and only show up in large environments they become
> hard for developers to fix.
>
> So the issue gets stuck in limbo, with nowhere to test a patchset and
> no one owning the issue.
>
> These issues pile up and pretty soon there is a significant difference
> between the default documented workflow and the 'scale' workflow which
> is filled with workarounds which may or may not be documented
> upstream.
>
> I'd like to propose getting these issues more visibility to having a
> periodic upstream job that uses 20-30 ovb instances to do a larger
> deployment. Maybe at 3am on a Sunday or some other time where there's
> idle execution capability to exploit. The goal being to make these
> sorts of issues more visible and hopefully get better at fixing them.

Wait no, I know some folks at 3am on a Saturday night who use TripleO
CI (ok that was a joke).

> To be honest I'm not sure this is the best solution, but I'm seeing
> this anti pattern across several issues and I think we should try and
> come up with a solution.
>

Yes this proposal is really cool. There is an alternative to run this
periodic scenario outside TripleO CI and send results via email maybe.
But it is something we need to discuss with RDO Cloud people and see
if we would have such resources to make it on a weekly frequency.

Thanks for bringing this up, it's crucial for us to have this kind of
feedback, now let's take actions.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][elections]questions about one platform vision

2017-04-18 Thread Adam Lawson
My personal feeling:

We need to be very very careful. While I really respect Jay Pipes and his
commentary, I fundamentally disagree with his toolbox mindset. OpenStack is
one tool in the Enterprise toolbox. It isn't a toolbox. K8s is another tool
in the toolbox since it's turning out to be much more than just a container
management platform. Believe it or not AWS is another.

I've been an OpenStack architect for at least 5+ years now and work with
many large Fortune 100 IT shops. OpenStack in the enterprise is being used
to orchestrate virtual machines. Despite the additional capabilities
OpenStack is trying to accommodate, that's basically it. At scale, that's
what they're doing. Not many are orchestrating bare metal that I've seen or
heard. And they are exploring K8s and Docker Swarm to orchestrate
containers. They aren't looking at OpenStack to do that. I recently
attended the K8s conference in Berlin this year and I'll tell you, the
container community is not looking at OpenStack as the means to manage
containers. If they are, they were likely sitting at the OpenStack booth.
Additionally these enterprises are not going to use two platforms side by
side with two means of orchestrating resources. That's both unrealistic and
understandable. Shoe-horning K8s into an OpenStack model really underserves
the container user space.

OpenStack's approach is to treat K8s as a tool.
K8s is working to classify OpenStack as a tool.

So to me we're one of two - maybe one of three solid FOSS cloud platforms -
not including Azure and AWS which are both trending up in consumer adoption
again. All of these are aiming to orchestrate the same resources and in
different ways, they each do it very well. A One Platform vision coming
from the minds within one of those projects creates unnecessary friction
and sounds a little small-minded. Big world out there - we're not the only
player.

In the end I guess I'm trying to say that we need to be careful when we
make assertions because this vision sounds like we're drinking too much of
our own Kool-Aid. When we assume our platform orchestrates the heap, we
need to understand there are several other heaps getting bigger and do
things OpenStack can't. If we buy into a marketing vision, we start a
downward path towards where Eucalyptus and CloudStack are today.

Just my oh, 3 cents worth. ; )

//adam


*Adam Lawson*

Principal Architect
Office: +1-916-794-5706

On Tue, Apr 18, 2017 at 7:39 AM, Flavio Percoco  wrote:

> On 16/04/17 09:03 +0100, Neil Jerram wrote:
>
>> FWIW, I think the Lego analogy is not actually helpful for another
>> reason: it has vastly too many ways of combining, and (hence) no sense at
>> all of consistency / interoperability between the different things that you
>> can construct with it. Whereas for OpenStack I believe you are also aiming
>> for some forms of consistency and interoperability.
>>
>
> Could you expand on why you think the lego analogy does not cover
> consistency
> and interoperability?
>
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] [patrole] Question regarding patrole release management

2017-04-18 Thread MONTEIRO, FELIPE C
Hi all,

I have a question regarding patrole and release management. Many projects like 
heat or murano have a tempest plugin within their repos, so by extension their 
tempest plugins have releases, as the projects change over time. However, since 
patrole is just a tempest plugin, yet heavily reliant on tempest, how should 
patrole do release management? Am I correct in thinking that it should, in the 
first place? With nova-network and other APIs slated for deprecation in Pike 
and beyond, Patrole will logically have to continuously be maintained to keep 
up, meaning that older tests, just like with Tempest, will have to be phased 
out. If Patrole, then, does not have releases, then older release-dependent 
tests and functionality will over time be lost.

Thank You,
Felipe Monteiro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Fox, Kevin M
Kolla's general target is in the node range of 1 - 100 nodes. A lot of smaller 
clusters can't afford to do ci/cd of their own and so Kolla currently pushes 
all the work to the site, and the site can't afford to do it right, and so just 
doesn't deal with security updates.

This is akin to a site forcing super secure passwords and so all the users just 
write the password on a sticky note and put it under their keyboard. The ideal 
is extra security, but the reality is less security.

Keeping containers up to date, with security updates applied and trackable, 
helps solve a real issue. I agree, it is better to do it at your own site, if 
you can afford it, but like most things in security, it's not a bool, secure or 
not, it's a spectrum. Providing tagged, up to date images on the hub strengthens 
the security of a lot of openstacks.

It also lowers the very high bar for getting a production system working. You 
don't have to build all the containers from the get go to get a system going. 
You can do that as you get time to do so.

Thanks,
Kevin


From: Jeffrey Zhang [zhang.lei@gmail.com]
Sent: Monday, April 17, 2017 7:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

I think we have two topics and improvements here

1. images in https://hub.docker.com/r/kolla/
2. tag in end-user env.

# images in hub.docker.com

we are building kolla tag images and pushing them to 
hub.docker.com. After this,
we do nothing for these images.

The issue is

1. any security update is not included in these images.
   solution: I do not think using 4.0.0-1, 4.0.0-2 on 
hub.docker.com is a good idea.
   If so, we need to mark what is in the 4.0.0-1 container and what the difference is with 
4.0.0-2.
   This will just create more confusion.
   And any prod env shouldn't depend on hub.docker.com's 
images, which are vulnerable
   to attack and are mutable.

2. branch images are not pushed.
   solution: we can add a job to push branch images into 
hub.docker.com like inc0
   said. For example:
   centos-source-nova-api:4.0.0
   centos-source-nova-api:ocata
   centos-source-nova-api:pike
   centos-source-nova-api:master
   But branch tag images are not stable ( even if the name is stable/ocata ), users 
are
   not recommended to use these images

# images in end-user env

I recommend end users should build their own images rather than use 
hub.docker.com directly.
In my env, I build images with the following tag rule.

When using 4.0.0 to build multiple times, I use a different tag name. For example
   1st: 4.0.0.1
   2nd: 4.0.0.2
   3rd: 4.0.0.3
   ...

The advantage in this way is: keep each tag as immutable ( never override )
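For illustration, a rough sketch of that kind of local build flow (assuming kolla-build's --base/--type/--tag options; registry and image names are placeholders):

    # build a fresh set of images with a unique, never-reused tag
    kolla-build --base centos --type source --tag 4.0.0.3
    # push to a private registry you control so the tag stays immutable
    docker tag kolla/centos-source-nova-api:4.0.0.3 registry.example.com/kolla/centos-source-nova-api:4.0.0.3
    docker push registry.example.com/kolla/centos-source-nova-api:4.0.0.3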

On Tue, Apr 18, 2017 at 6:46 AM, Steve Baker 
> wrote:


On Tue, Apr 18, 2017 at 9:57 AM, Doug Hellmann 
> wrote:
Excerpts from Michał Jastrzębski's message of 2017-04-12 15:59:34 -0700:
> My dear Kollegues,
>
> Today we had discussion about how to properly name/tag images being
> pushed to dockerhub. That moved towards general discussion on revision
> mgmt.
>
> Problem we're trying to solve is this:
> If you build/push images today, your tag is 4.0
> if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
> we tag new release.
>
> But image built today is not equal to image built tomorrow, so we
> would like something like 4.0.0-1, 4.0.0-2.
> While we can reasonably detect history of revisions in dockerhub,
> local env will be extremely hard to do.
>
> I'd like to ask you for opinions on desired behavior and how we want
> to deal with revision management in general.
>
> Cheers,
> Michal
>

What's in the images, kolla? Other OpenStack components?

Yes, each image will typically contain all software required for one OpenStack 
service, including dependencies from OpenStack projects or the base OS. 
Installed via some combination of git, pip, rpm, deb.

Where does the
4.0.0 come from?


It's the python version string from the kolla project itself, so ultimately I 
think pbr. I'm suggesting that we switch to using the version.release_string[1] 
which will tag with the longer version we use for other dev packages.

[1]https://review.openstack.org/#/c/448380/1/kolla/common/config.py
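
For reference, a minimal sketch of the difference between the two strings (assuming pbr's VersionInfo API; the printed values are only examples):

    # compare the short release version with the longer pbr release string
    from pbr import version

    info = version.VersionInfo('kolla')
    print(info.version_string())   # e.g. 4.0.0
    print(info.release_string())   # e.g. 4.0.1.dev42 (longer dev-style version)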


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack 

Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Fox, Kevin M
I think we need to meet the following:

 1. provide up to date containers, including when the dependencies change without 
the main package changing.
 2. trackable changes. If a dep in a container changed, the tag should be 
different. This allows users to know that something changed.
 3. atomic changes. The version on their system now vs the new version should 
be easily separable so roll forward/rollback is supported in case something 
goes wrong.
 4. minimal changes. The revision should be bumped only if something did 
change, not when it was merely rebuilt with nothing changed.
 5. tested changes. The gate jobs should be run to successful completion before 
release.

With this arrangement, and with a similar arrangement for helm package 
revisioning, the user can:

helm upgrade --name nova-compute kolla/nova-compute

And if any of the containers changed, it will perform a rolling upgrade.

Something that can be easily put in a cron job and should have minimal impact.
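
For example, a crontab entry along those lines (reusing the helm command above; purely illustrative) could be:

    # nightly check: pull new chart/image revisions and roll them out only if something changed
    0 3 * * * helm upgrade --name nova-compute kolla/nova-compute >> /var/log/kolla-upgrade.log 2>&1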

:ocata or 4.0.0 doesn't meet #2 and #3. I'm ok with having those tags in 
addition to revisions to support folks that dont want to follow the pattern 
above. In that case, they would just point to the newest revision.

Thanks,
Kevin


From: Michał Jastrzębski [inc...@gmail.com]
Sent: Monday, April 17, 2017 8:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

So, while I agree that everyone should use images built locally, I
also see the value of downloading dockerhub-hosted images (besides
speed). Images at dockerhub passed our gates and are CI tested, which
means the quality of these images is ensured by our CI. Not everyone can
afford to have a CI/CD system in their infra, so for small/medium
installations this actually might be more stable than local builds.

Given that both local and dockerhub hosted images have valid
production use case, I'd like that we keep our tagging mechanism same
for both.
That makes revisions per se impossible to handle (4.0.0-3 on dockerhub
doesn't mean 4.0.0-3 locally). Also, how often would we push 4.0.0-x? Daily
for the quickest update on security patches (preferably, for me), that
would mean that our dockerhub registry would grow extremely fast if
we'd like to retain every revision. One idea would be to put a little
of this weight on users (sorry!). We could upload daily :ocata images
and delete old ones. I think not many people do daily openstack
upgrades (cool use case tho), which means they will have some stale
:ocata image locally. We can just put an expectation to backup your
images locally, ones that you actually use. Just tar.gz
/var/lib/docker/volumes/registry/_data and save it somewhere safe.
Then you can always come back to it.
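
A minimal sketch of that backup/restore (path taken from above; adjust to wherever your registry volume actually lives):

    # back up the local registry data so a deleted :ocata image can be restored later
    tar -czf registry-backup-$(date +%F).tar.gz /var/lib/docker/volumes/registry/_data
    # restore it later if needed
    tar -xzf registry-backup-2017-04-18.tar.gz -C /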

Bottom line, my suggestion for now is to have schema like that:

image-name:4.0.0 -> corresponding docker tag and image built (pref
automatically) close to tagging date
image-name:ocata -> tip of ocata branch, built daily - the freshest
code that passed gates
image-name:master -> tip of master branch
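
In other words, a deployer would pick the level of churn they want (image name is a placeholder):

    docker pull kolla/centos-source-nova-api:4.0.0    # pin to the release tag
    docker pull kolla/centos-source-nova-api:ocata    # track the daily ocata rebuild
    docker pull kolla/centos-source-nova-api:master   # track master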

To achieve fully repeatable builds, we would need to have 4.0.0 type
tagging (based on pbr+git tag), version manifesto generation (as
discussed on PTG) and mechanism to consume existing manifestos and
rebuild images with these exact versions (barring issues with a repo 
removing a previous version...). That is quite a project in its own right...

Thoughts?

Cheers,
Michal

On 17 April 2017 at 19:43, Jeffrey Zhang  wrote:
> I think we have two topics and improvements here
>
> 1. images in https://hub.docker.com/r/kolla/
> 2. tag in end-user env.
>
> # images in hub.docker.com
>
> we are building kolla tag image and push them into hub.docker.com. After
> this,
> we do nothing for these images.
>
> The issue is
>
> 1. any security update is not included in these images.
>solution: I do not think use 4.0.0-1 4.0.0-2 in hub.docker.com is a good
> idea.
>if so, we need mark what 4.0.0-1 container and what's the difference with
> 4.0.0-2.
>This will make another chaos.
>And any prod env shouldn't depend on hub.docker.com's images, which is
> vulnerable
>to attack and is mutable.
>
> 2. branch images are not pushed.
>solution: we can add a job to push branch images into hub.docker.com like
> inc0
>said. For example:
>centos-source-nova-api:4.0.0
>centos-source-nova-api:ocata
>centos-source-nova-api:pike
>centos-source-nova-api:master
>But branch tag images is not stable ( even its name is stable/ocata ),
> users are
>not recommended to use these images
>
> # images in end-user env


> I recommended end user should build its own image rather then use
> hub.docker.com directly.
> in my env, I build images with following tag rule.
>
> when using 4.0.0 to build multi time, i use different tag name. For example
>1st: 4.0.0.1
>2nd: 4.0.0.2
>3rd: 4.0.0.3
>...
>
> The advantage in this way is: keep 

[openstack-dev] [Congress] Tempest test (congress-datasources)

2017-04-18 Thread Carmine Annunziata
Carmine
-- Messaggio inoltrato --
Da: "Carmine Annunziata" 
Data: 16 apr 2017 18:21
Oggetto: Tempest test (congress-datasources)
A: "Tim Hinrichs" 
Cc: "Eric K" 

Hi guys,
I'm trying to write the tempest test for magnum in the
scenario/congress_datasources dir. Which test_driver may I use to base it off
of?

Thank you,
-- 
Carmine
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Michał Jastrzębski
Our issue is a bit complex though. Dockerhub-pushed images are less
affected by the version of our code than by the versions of everyone else's code.

On 18 April 2017 at 07:36, Flavio Percoco  wrote:
> On 13/04/17 13:48 +1200, Steve Baker wrote:
>>
>> On Thu, Apr 13, 2017 at 10:59 AM, Michał Jastrzębski 
>> wrote:
>>
>>> My dear Kollegues,
>>>
>>> Today we had discussion about how to properly name/tag images being
>>> pushed to dockerhub. That moved towards general discussion on revision
>>> mgmt.
>>>
>>> Problem we're trying to solve is this:
>>> If you build/push images today, your tag is 4.0
>>> if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
>>> we tag new release.
>>>
>>> But image built today is not equal to image built tomorrow, so we
>>> would like something like 4.0.0-1, 4.0.0-2.
>>> While we can reasonably detect history of revisions in dockerhub,
>>> local env will be extremely hard to do.
>>>
>>> I'd like to ask you for opinions on desired behavior and how we want
>>> to deal with revision management in general.
>>>
>>>
>> I already have a change which proposes tagging images with a pbr built
>> version [1]. I think if users want tags which are stable for the duration
>> of a major release they should switch to using the tag specified by
>> kolla-build.conf base_tag, which can be set to latest, ocata, pike, etc.
>> This would leave the version tag to at least track changes to the kolla
>> repo itself. Since the contents of upstream kolla images come from such
>> diverse sources, all I could suggest to ensure unique tags are created for
>> unique images is to append a datestamp to [1] (or have an extra datestamp
>> based tag). Bonus points for only publishing a new datestamp tag if the
>> contents of the image really changes.
>>
>> In the RDO openstack-kolla package we now tag images with the
>> {Version}-{Release} of the openstack-kolla package which built it[2]. I
>> realise this doesn't solve the problem of the tag needing to change when
>> other image contents need to be updated, but I believe this can be solved
>> within the RDO image build pipeline by incrementing the {Release} whenever
>> a new image needs to be published.
>>
>> [1] https://review.openstack.org/#/c/448380/
>> [2] https://review.rdoproject.org/r/#/c/5923/1/openstack-kolla.spec
>
>
> I like this option better because it's more consistent with how things are
> done
> elsewhere in OpenStack.
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][infra][pbr] Nominating Stephen Finucane for pbr-core

2017-04-18 Thread Jeremy Stanley
On 2017-04-12 08:14:31 -0500 (-0500), Monty Taylor wrote:
[...]
> Recently Stephen Finucane (sfinucan) has stepped up to the plate
> to help sort out issues we've been having. He's shown a lack of
> fear of the codebase and an understanding of what's going on. He's
> also following through on patches to projects themselves when
> needed, which is a huge part of the game. And most importantly he
> knows when to suggest we _not_ do something.
[...]

As an occasional pbr author and transitive core, I'm in favor. The
more help the better!
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 19

2017-04-18 Thread Matt Riedemann

On 4/17/2017 2:49 PM, Matt Riedemann wrote:

I was wondering if we could re-use some of Yingxin's performance scale
testing work that he presented at the Newton summit [1]. Alex said
Yingxin is working on Ceph now, but the tooling should be on github
somewhere.


Found it:

https://github.com/cyx1231st/nova-scheduler-bench

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread gordon chung


On 18/04/17 10:42 AM, Julien Danjou wrote:
> On Tue, Apr 18 2017, gordon chung wrote:
>
>> do we want it configurable? tbh, would anyone configure it or know
>> how to configure it? even for us, we're just guessing somewhat.lol i'm
>> going to leave it static for now.
>
> I think we want it to be configurable, though most people would probably
> not tweak it. But I can imagine some setups where increasing it would
> make sense.
> There's some sense of exposing it anyway, even if it does not change
> much. For example, we never exposed TASKS_PER_WORKER but in the end it
> seems the 16 value is not optimal. But since we did not expose it,
> there's barely any way for a tester to tweak it and see what value
> works best. :)

well argued. take it :)

>
> You're completely right, we needed to discuss that anyway. All your
> patches version and tries build up our knowledge and expertise on the
> subject, so it was definitely worth the effort, and kudos go to you for
> taking that job!
>
> What will probably make you go back to hashring?

i think your argument that hashring can be configured effectively to 
"work on everything" was a good argument.

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][quickstart][RFE] supporting arbitrary OS type, review and feedback required

2017-04-18 Thread Bogdan Dobrelya
Hello owls and stackers.
As you know, tripleo-quickstart can run on only rhel/centos.
For those who want to try it on an arbitrary OS from a docker container,
I created that RFE [0] and an implementation [1]. Those are just minor
changes to the quickstart repo, but they are also enablers for the use
case, which is nice to have. So please PTAL.

Note, it works for me with an additional (not related to this topic
though) ansible plays/inventory/vars wrapper [2] setup. You can safely
ignore the latter, it's quite a mess. Unless you want not only to review
but to try it as well! If so, please follow the README and reach me directly
for questions.

[0] https://bugs.launchpad.net/tripleo/+bug/1676373
[1] https://review.openstack.org/#/q/topic:rfe1676373
[2] https://github.com/bogdando/oooq-warp/blob/master/oooq-warp.yaml

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Bridging the production/CI workflow gap with large periodic CI jobs

2017-04-18 Thread Ben Nemec



On 04/17/2017 02:52 PM, Justin Kilpatrick wrote:

Because CI jobs tend to max out about 5 nodes there's a whole class of
minor bugs that make it into releases.

What happens is that they never show up in small clouds, then when
they do show up in larger testing clouds the people deploying those
simply work around the issue and get onto what they where supposed to
be testing. These workarounds do get documented/BZ'd but since they
don't block anyone and only show up in large environments they become
hard for developers to fix.

So the issue gets stuck in limbo, with nowhere to test a patchset and
no one owning the issue.

These issues pile up and pretty soon there is a significant difference
between the default documented workflow and the 'scale' workflow which
is filled with workarounds which may or may not be documented
upstream.

I'd like to propose getting these issues more visibility to having a
periodic upstream job that uses 20-30 ovb instances to do a larger
deployment. Maybe at 3am on a Sunday or some other time where there's
idle execution capability to exploit. The goal being to make these
sorts of issues more visible and hopefully get better at fixing them.

To be honest I'm not sure this is the best solution, but I'm seeing
this anti pattern across several issues and I think we should try and
come up with a solution.


I like this idea a lot, and I think we discussed it previously on IRC 
and worked through some potential issues with setting up such a job. 
One other thing that occurred to me since then is that deployments at 
scale generally require a larger undercloud than we have in CI. 
Unfortunately I'm not sure whether we can change that just for a 
periodic job.  There are a couple of potential workarounds for that, but 
they would add some complication so we'll need to keep that in mind.


Overall +1 to the idea though.  Larger scale deployments are clearly 
something we won't be able to run on every patch set so a periodic job 
seems like the right fit here.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] New resource implementation workflow

2017-04-18 Thread Norbert Illés

Hi,

Thanks for the advice! The HIDDEN status looks better to me as it 
completely hides the resource from the user and it won't be included in 
the documentation, but using this status to mark the implementation 
of the resource as incomplete feels like a misuse of it.


So I believe we should use the UNSUPPORTED attribute in this case
and implement the create and delete operations together plus the relevant 
tests, then the update plus the relevant tests.
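
As a rough sketch of that first increment (class, property and handler bodies here are hypothetical; the point is the UNSUPPORTED marking plus create/delete only):

    # Hypothetical skeleton: trunk resource with only create/delete wired up.
    from heat.engine import properties
    from heat.engine import resource
    from heat.engine import support


    class Trunk(resource.Resource):

        support_status = support.SupportStatus(status=support.UNSUPPORTED)

        PROPERTIES = (PORT,) = ('port',)

        properties_schema = {
            PORT: properties.Schema(
                properties.Schema.STRING,
                'ID or name of the parent port.',
                required=True),
        }

        def handle_create(self):
            # call Neutron to create the trunk and store its id
            pass

        def handle_delete(self):
            # delete the trunk, ignoring NotFound
            pass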


Cheers,
Norbert

On 04/12/2017 12:26 PM, Peter Razumovsky wrote:

I also remember that Heat has smth like 'hidden' in resource plugin
declaration. Usually it is used to hide deprecated resource types so
that new stacks with those can not be created but old ones can be at
least deleted. May be you could use that flag while developing until
you think that resource is already usable, although it might
complicate your own testing of those resources.


In addition, I advise you to use the UNSUPPORTED status until the resource
is fully implemented. For details, I suggest reading the resource support
status page.

2017-04-11 17:27 GMT+04:00 Pavlo Shchelokovskyy
>:

Hi Norbert,

my biggest concern with the workflow you've shown is that in the
meantime it would be possible to create undeletable stacks / stacks
that leave resources behind after being deleted. As the biggest
challenge is usually in updates (if it is not UpdateReplace) I'd
suggest implementing create and delete together. To ease development
you could start with only basic properties for the resource if it is
possible to figure out their set (with some sane defaults if those
are absent in API) and add more tunable resource properties later.

I also remember that Heat has smth like 'hidden' in resource plugin
declaration. Usually it is used to hide deprecated resource types so
that new stacks with those can not be created but old ones can be at
least deleted. May be you could use that flag while developing until
you think that resource is already usable, although it might
complicate your own testing of those resources.

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com 

On Tue, Apr 11, 2017 at 3:33 PM, Norbert Illés
>
wrote:

Hi everyone,

Me and two of my colleagues are working on adding Neutron Trunk
support to Heat. One of us working on the resource
implementation, one on the unit tests and one on the functional
tests. But all of these looks like a big chunk of work so I'm
wondering how can we divide them into smaller parts.

One idea is to split them along life cycle methods (create,
update, delete, etc.), for example:
 * Implement the resource creation + the relevant unit tests +
the relevant functional tests, review and merge these
 * implementing the delete operation + the relevant unit tests +
the relevant functional tests, review and merge these
 * move on to implementing the update operation + tests... and
so on.

Lastly, when the last part of the code and tests merged, we can
document the new resource, create templates in the
heat-templates etc.

Does this workflow sound feasible?

I'm mostly concerned about the fact that there will be a time
period when only a half-done feature is merged into the Heat
codebase, and I'm not sure if this is acceptable.

Has anybody implemented a new resource with a team? I would love
to hear some experiences about how others have organized this
kind of work.

Cheers,
Norbert


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Best regards,
Peter Razumovsky



Re: [openstack-dev] [tc][elections]questions about one platform vision

2017-04-18 Thread Neil Jerram
On Tue, Apr 18, 2017 at 3:45 PM Flavio Percoco  wrote:

> On 16/04/17 09:03 +0100, Neil Jerram wrote:
> >FWIW, I think the Lego analogy is not actually helpful for another
> reason: it has vastly too many ways of combining, and (hence) no sense at
> all of consistency / interoperability between the different things that you
> can construct with it. Whereas for OpenStack I believe you are also aiming
> for some forms of consistency and interoperability.
>
> Could you expand on why you think the lego analogy does not cover
> consistency
> and interoperability?
>

Well, for example, because you can build either a house or a jellyfish out
of Lego, and I don't think there's anything about the relationship between
those two things that evokes the consistency that some folks are talking
about / looking for between OpenStack components.

Neil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] uwsgi for API services - RSN

2017-04-18 Thread Sean Dague
This is all merged now. If you run into any issues with real WSGI
running, please poke up in #openstack-qa and we'll see what we can do to
get things ironed out.

-Sean
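
For anyone updating their local environment, a minimal local.conf sketch using the WSGI_MODE option described in the quoted message below (illustrative only):

    [[local|localrc]]
    # run API services as uwsgi apps behind apache mod_proxy_uwsgi
    WSGI_MODE=uwsgi
    # or, while transitioning:
    # WSGI_MODE=mod_wsgi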

On 04/18/2017 07:19 AM, Sean Dague wrote:
> Ok, the patch series has come together now, and
> https://review.openstack.org/#/c/456344/ remains the critical patch.
> 
> This introduces a new global config option: "WSGI_MODE", which will be
> either "uwsgi" or "mod_wsgi" (for the transition).
> 
> https://review.openstack.org/#/c/456717/6/lib/placement shows what it
> takes to make something that's current running under mod_wsgi to run
> under uwsgi in this model.
> 
> The intent is that uwsgi mode becomes primary RSN, as that provides a
> more consistent experience for development, and still exercises the API
> services as real wsgi applications.
> 
>   -Sean
> 
> On 04/13/2017 08:01 AM, Sean Dague wrote:
>> One of the many reasons for getting all our API services running wsgi
>> under a real webserver is to get out of the custom ports for all
>> services game. However, because of some of the limits of apache
>> mod_wsgi, we really haven't been able to do that in our development
>> enviroment. Plus, the moment we go to mod_wsgi for some services, the
>> entire development workflow for "change this library, refresh the
>> following services" changes dramatically.
>>
>> It would be better to have a consistent story here.
>>
>> So there is some early work up to use apache mod_proxy_uwsgi as the
>> listener, and uwsgi processes running under systemd for all the
>> services. These bind only to a unix local socket, not to a port.
>> https://review.openstack.org/#/c/456344/
>>
>> Early testing locally has been showing progress. We still need to prove
>> out a few things, but this should simplify a bunch of the setup. And
>> coming with systemd will converge us back to a more consistent
>> development workflow when updating common code in a project that has
>> both API services and workers.
>>
>> For projects that did the mod_wsgi thing in a devstack plugin, this is
>> going to require some adjustment. Exactly what is not yet clear, but
>> it's going to be worth following that patch.
>>
>>  -Sean
>>
> 
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][tripleo] early feedback about upgrades from ocata to pike

2017-04-18 Thread Emilien Macchi
I've been playing with TripleO upgrades from Ocata to Pike and I've
seen that it doesn't work yet. We're currently stuck on Neutron bits:
https://bugs.launchpad.net/tripleo/+bug/1683835

After the upgrade, here are the versions of Neutron:

openstack-neutron-11.0.0-0.20170417080927.b49764c.el7.centos.noarch
(https://github.com/openstack/neutron/commits/b49764c)
openstack-neutron-lbaas-10.0.1-0.20170417085201.071bb03.el7.centos.noarch
(where 071bb03 is the latest commit)
python-neutron-11.0.0-0.20170417080927.b49764c.el7.centos.noarch
python-neutron-lbaas-10.0.1-0.20170417085201.071bb03.el7.centos.noarch
python-neutron-lib-1.3.0-0.20170320164115.cd5b476.el7.centos.noarch
python2-neutronclient-6.1.0-0.20170208193918.1a2820d.el7.centos.noarch

Any help on this one is very welcome,

Thanks!
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][elections]questions about one platform vision

2017-04-18 Thread Flavio Percoco

On 16/04/17 09:03 +0100, Neil Jerram wrote:

FWIW, I think the Lego analogy is not actually helpful for another reason: it 
has vastly too many ways of combining, and (hence) no sense at all of 
consistency / interoperability between the different things that you can 
construct with it. Whereas for OpenStack I believe you are also aiming for some 
forms of consistency and interoperability. 


Could you expand on why you think the lego analogy does not cover consistency
and interoperability?

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread Julien Danjou
On Tue, Apr 18 2017, gordon chung wrote:

> do we want it configurable? tbh, would anyone configure it or know 
> how to configure it? even for us, we're just guessing somewhat.lol i'm 
> going to leave it static for now.

I think we want it to be configurable, though most people would probably
not tweak it. But I can imagine some setups where increasing it would
make sense.
There's some sense of exposing it anyway, even if it does not change
much. For example, we never exposed TASKS_PER_WORKER but in the end it
seems the 16 value is not optimal. But since we did not expose it,
there's barely any way for a tester to tweak it and see what value
works best. :)
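
Purely as an illustration of exposing such a knob (the option name, group and default below are hypothetical, not Gnocchi's actual configuration):

    # hypothetical oslo.config knob for the number of incoming-measure sacks
    from oslo_config import cfg

    OPTS = [
        cfg.IntOpt('incoming_sacks',
                   default=128,
                   min=1,
                   help='Number of sacks used to shard the incoming measures '
                        'backlog; changing it requires a full stop of all '
                        'Gnocchi services.'),
    ]

    cfg.CONF.register_opts(OPTS, group='storage')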

> this is not really free. choosing hashring means we will have idle 
> workers and more complexity of figuring out what each of the other 
> agents look like in group. it's a trade-off (especially considering how 
> few keys to nodes we have) which is why i brought up question.

You're right, it's a trade-off. I think we can go with the non-hashring
approach for now, that you implemented, and see if/how we need to
evolve it. It seems to be better than the current scheduling anyway.

> i'll be honest, i'll probably still switch back to hashring... but want 
> to make sure we're not just thinking hashring only.

You're completely right, we needed to discuss that anyway. All your
patches version and tries build up our knowledge and expertise on the
subject, so it was definitely worth the effort, and kudos go to you for
taking that job!

What will probably make you go back to hashring?

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Tags, revisions, dockerhub

2017-04-18 Thread Flavio Percoco

On 13/04/17 13:48 +1200, Steve Baker wrote:

On Thu, Apr 13, 2017 at 10:59 AM, Michał Jastrzębski 
wrote:


My dear Kollegues,

Today we had discussion about how to properly name/tag images being
pushed to dockerhub. That moved towards general discussion on revision
mgmt.

Problem we're trying to solve is this:
If you build/push images today, your tag is 4.0
if you do it tomorrow, it's still 4.0, and will keep being 4.0 until
we tag new release.

But image built today is not equal to image built tomorrow, so we
would like something like 4.0.0-1, 4.0.0-2.
While we can reasonably detect history of revisions in dockerhub,
local env will be extremely hard to do.

I'd like to ask you for opinions on desired behavior and how we want
to deal with revision management in general.



I already have a change which proposes tagging images with a pbr built
version [1]. I think if users want tags which are stable for the duration
of a major release they should switch to using the tag specified by
kolla-build.conf base_tag, which can be set to latest, ocata, pike, etc.
This would leave the version tag to at least track changes to the kolla
repo itself. Since the contents of upstream kolla images come from such
diverse sources, all I could suggest to ensure unique tags are created for
unique images is to append a datestamp to [1] (or have an extra datestamp
based tag). Bonus points for only publishing a new datestamp tag if the
contents of the image really changes.

In the RDO openstack-kolla package we now tag images with the
{Version}-{Release} of the openstack-kolla package which built it[2]. I
realise this doesn't solve the problem of the tag needing to change when
other image contents need to be updated, but I believe this can be solved
within the RDO image build pipeline by incrementing the {Release} whenever
a new image needs to be published.

[1] https://review.openstack.org/#/c/448380/
[2] https://review.rdoproject.org/r/#/c/5923/1/openstack-kolla.spec


I like this option better because it's more consistent with how things are done
elsewhere in OpenStack.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-04-18 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. rolling upgrades
1.1. make a change to the grenade job to only upgrade conductor, quick fix 
is supposed to be https://review.openstack.org/456166 (WIP)
1.2. the next patch is ready for reviews: 
https://review.openstack.org/#/c/412397/
2. review next BFV patch:
2.1. next: https://review.openstack.org/#/c/366197/
3. update/review next rescue patches:
3.1. https://review.openstack.org/#/c/350831/
3.2. https://review.openstack.org/#/c/353156/
4. review e-tags spec: https://review.openstack.org/#/c/381991/
5. next driver comp client patch: https://review.openstack.org/#/c/419274/


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 03 Apr 2017 and 10 Apr 2017)
- Ironic: 255 bugs (+4) + 253 wishlist items (+3). 22 new (-4), 205 in progress 
(+3), 0 critical, 28 high (+1) and 32 incomplete (+4)
- Inspector: 15 bugs (0) + 34 wishlist items (+2). 2 new, 21 in progress (+1), 
0 critical, 1 high and 4 incomplete
- Nova bugs with Ironic tag: 10 (-2). 1 new (-1), 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- next patch to be reviewed: https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests

Generic boot-from-volume (TheJulia, dtantsur)
-
- specs and blueprints:
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/volume-connection-information.html
- code: https://review.openstack.org/#/q/topic:bug/1526231
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html
- code: https://review.openstack.org/#/q/topic:bug/1559691
- https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
- code: 
https://review.openstack.org/#/q/topic:bp/ironic-boot-from-volume
- status as of most recent weekly meeting:
- TheJulia will be unavailable the week of April 10th and the following 
week.
- joanna has volunteered to run the meeting in TheJulia's absence.
- mjturek is working on getting together devstack config updates/script 
changes in order to support this configuration
- It's looking more like all setup can/should happen during tempest.
- hshiina is looking in Nova side changes and is attempting to obtain 
clarity on some of the issues that tenant network separation introduced into 
the deployment workflow.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/355625/  - Merged - Has Follow-up 
that requires reviews https://review.openstack.org/#/c/453839/
https://review.openstack.org/#/c/366197/
https://review.openstack.org/#/c/406290
https://review.openstack.org/#/c/413324
https://review.openstack.org/#/c/454243/ - WIP logic changes for 
deployment process.  Tenant network separation introduced some additional 
complexity, quick conceptual feedback requested.
https://review.openstack.org/#/c/214586/ - Volume Connection 
Information Rest API Change
Additional patches exist, for python-ironicclient and one for nova.  
Links in the patch/note tracking etherpad.

Rolling upgrades and grenade-partial (rloo, jlvillal)
-
- spec approved; code patches: 
https://review.openstack.org/#/q/topic:bug/1526283
- status as of most recent weekly meeting:
- patches ready for reviews. Next one: 'Add version column' 
(https://review.openstack.org/#/c/412397/)
- Testing work:
- 27-Mar-2017: Grenade multi-node is non-voting
- need to change it to only upgrade the conductor

Reference architecture guide (jroll)

- no updates

Python 3.5 compatibility (JayF, hurricanerix)
-
- no updates

Deploying with Apache and WSGI in CI (vsaienk0)
---
- seems like we can deploy with WSGI, but it still uses a fixed port, instead 
of sub-path
- next one is https://review.openstack.org/#/c/444337/
- inspector is TODO and depends on https://review.openstack.org/435517

Driver composition (dtantsur, jroll)

- spec: 

Re: [openstack-dev] [intel experimental ci] Is it actually checking anything?

2017-04-18 Thread Mooney, Sean K
> -Original Message-
> From: Mikhail Medvedev [mailto:mihail...@gmail.com]
> Sent: Monday, April 17, 2017 10:51 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Cc: openstack-networking-ci 
> Subject: Re: [openstack-dev] [intel experimental ci] Is it actually
> checking anything?
> 
> On Mon, Apr 17, 2017 at 12:31 PM, Jay Pipes  wrote:
> > Please see the below output from the Intel Experimental CI (from
> > https://review.openstack.org/414769):
> >
> > On 04/17/2017 01:28 PM, Intel Experimental CI (Code Review) wrote:
> >>
> >> Intel Experimental CI has posted comments on this change.
> >>
> >> Change subject: placement: SRIOV PF devices as child providers
> >>
> ..
> >>
> >>
> >> Patch Set 17:
> >>
> >> Build succeeded (check pipeline).
> >>
> >> - tempest-dsvm-full-nfv-xenial
> >> http://intel-openstack-ci-logs.ovh/portland/2017-04-
> 17/414769/17/chec
> >> k/tempest-dsvm-full-nfv-xenial/1bcdb64
> >> : FAILURE in 38m 34s (non-voting)
> >> - tempest-dsvm-intel-nfv-xenial
> >> http://intel-openstack-ci-logs.ovh/portland/2017-04-
> 17/414769/17/chec
> >> k/tempest-dsvm-intel-nfv-xenial/a21d879
> >> : FAILURE in 40m 00s (non-voting)
> >> - tempest-dsvm-multinode-ovsdpdk-nfv-networking-xenial
> >> http://intel-openstack-ci-logs.ovh/portland/2017-04-
> 17/414769/17/chec
> >> k/tempest-dsvm-multinode-ovsdpdk-nfv-networking-xenial/837e59d
> >> : FAILURE in 47m 45s (non-voting)
> >
> >
> > As you can see, it says the build succeeded, but all three jobs in
> the
> > pipeline failed.
> 
> This would happen when CI is voting but all the jobs in a check are
> non-voting. Zuul ignores non-voting job result, and as there isn't a
> single voting job, it reports 'build succeeded'. Maybe it should be a
> zuul bug?
[Mooney, Sean K] Yes, this is the case: we have all jobs currently set to 
non-voting.
I had the same question in the past for our CI team; zuul's comment is slightly 
non-intuitive.
But from zuul's point of view the job did succeed, as it executed all the CI tasks 
and uploaded the
results. As to what this is running: we are moving the intel-nfv-ci out of our 
development lab
into a datacenter. The experimental CI is a duplicate of our normal intel-nfv-ci
and the intent is to swap the names and decommission our CI in our dev lab by 
the end of April,
all going well.
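
For reference, a sketch of how such jobs are typically marked non-voting in a zuul v2 layout (the name pattern here is illustrative):

    jobs:
      - name: ^tempest-dsvm-.*nfv.*-xenial$
        voting: false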
> 
> >
> > Is someone actively looking into this particular 3rd party CI system?
> 
> I do not see anything wrong with that CI (apart from misleading comment
> due to zuul issue I mentioned above).
[Mooney, Sean K] It is being maintained by the same people who are maintaining 
the main intel-nfv-ci.
As I said above, we will be swapping the main intel-nfv-ci account over to this 
new hardware later this month, all going well.
Once we have swapped over, the Intel Experimental CI account will be shut down 
when we decommission the infra in our dev lab.

> 
> >
> > Best,
> > -jay
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ---
> Mikhail Medvedev
> IBM
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] version document for project navigator

2017-04-18 Thread Jimmy McArthur
All, we have modified our ingest tasks to look for this new data. Can we 
get an ETA on when to expect updates from the majority of projects? 
Right now, there isn't too much to test with.


Cheers,
Jimmy


Thierry Carrez 
April 14, 2017 at 3:21 AM

OK approved.

Doug Hellmann 
April 13, 2017 at 11:43 AM

+1

The multi-file format was what the navigator team wanted, and there's
plenty of support for it among other reviewers. Let's move this forward.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Thierry Carrez 
April 13, 2017 at 11:03 AM

Do we really need the TC approval on this ? It's not a formal governance
change or anything.

Whoever has rights on that repo could approve it now and ask for
forgiveness later :)

Monty Taylor 
April 13, 2017 at 10:25 AM
On 04/13/2017 08:28 AM, Jimmy McArthur wrote:

Just checking on the progress of this. :)


Unfortunately a good portion of the TC was away this week at the 
leadership training so getting a final ok on it was a bit stalled. 
It's seeming like the multi-file version is the one most people like 
though, so I'm mostly expecting that to be what we end up with. We 
should be able to get final approval by Tuesday, and then can work on 
getting all of the project info filled in.



Monty Taylor 
April 7, 2017 at 7:05 AM


There is a new repo now:

http://git.openstack.org/cgit/openstack/project-navigator-data

I have pushed up two different patches with two different approaches:

https://review.openstack.org/#/c/454691
https://review.openstack.org/#/c/454688

One is a single file per release. The other is a file per service per
release.

Benefits of the single-file are that it's a single file to pull and
parse.

Benefits of the multi-file approach are that projects can submit
documents for themselves as patches without fear of merge conflicts,
and that the format is actually _identical_ to the format for version
discovery from the API-WG, minus the links section.

I think I prefer the multi-file approach, but would be happy either 
way.
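
As a purely hypothetical illustration of the per-service shape (field names
borrowed from the API-WG version discovery document minus the links section;
the values and the ocata/compute.yaml path are my guesses, not the merged
format), generating one of the multi-file documents could be as small as:

import yaml

# Hypothetical content only: "id"/"status"/"min_version"/"version" follow
# the API-WG version discovery keys (minus "links"); the actual values and
# the per-release file layout are illustrative guesses.
compute_ocata = {
    "id": "v2.1",
    "status": "CURRENT",
    "min_version": "2.1",
    "version": "2.42",
}

print(yaml.safe_dump({"versions": [compute_ocata]}, default_flow_style=False))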


__ 



OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Jimmy McArthur 
April 6, 2017 at 3:51 PM
Cool. Thanks Monty!


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Monty Taylor 
April 6, 2017 at 3:21 PM
On 04/06/2017 11:58 AM, Jimmy McArthur wrote:
Assuming this format is accepted, do you all have any sense of when this
data will be complete for all projects?


Hopefully "soon" :)

Honestly, it's not terribly difficult data to produce, so once we're
happy with it and where it goes, crowdsourcing filling it all in
should go quickly.


Jimmy McArthur 
April 5, 2017 at 8:59 AM
FWIW, from my perspective on the Project Navigator side, this format
works great. We can actually derive the age of the project from this
information as well by identifying the first release that has API data
for a particular project. I'm indifferent about where it lives, so I'd
defer to you all to determine the best spot.
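
For what it's worth, deriving that age could be as small as the sketch below;
it assumes a hypothetical <release>/<service>.yaml layout (the real layout had
not merged at this point) and leans on OpenStack release names sorting
alphabetically in chronological order.

from pathlib import Path


def first_release_with_data(repo_root, service):
    # Hypothetical layout: one directory per release, one file per service.
    releases = sorted(p.name for p in Path(repo_root).iterdir() if p.is_dir())
    for release in releases:
        if (Path(repo_root) / release / ("%s.yaml" % service)).exists():
            return release
    return None


print(first_release_with_data("project-navigator-data", "compute"))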

I really appreciate you all putting this together!

Jimmy


__ 



OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Thierry Carrez 
April 5, 2017 at 5:28 AM

Somehow missed this thread, so will repost here comments I made
elsewhere:

This looks good, but I would rather not overload the releases
repository. My personal preference (which was also expressed by
Doug in the TC meeting) would be to set this information up in a
"project-navigator" git repo that we would reuse for any 
information we

need to collect from projects for accurate display on the project
navigator. If the data is not maintained anywhere else (or easily
derivable from existing data), we would use that repository to 
collect

it from projects.

That way there is a clear place to go to to propose fixes to the
project
navigator data. Not knowing how to fix that data is a common 
complaint,
so if we 

Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread gordon chung


On 18/04/17 08:44 AM, Julien Danjou wrote:

>
> Live upgrade never has been supported in Gnocchi, so I don't see how
> that's a problem. It'd be cool to support it for sure, but we're far
> from having been able to implement it at any point in time in the past.
> So it's not a new issue or anything like that. I really don't see
> a problem with loading the number of sacks at startup.
>

it's a problem if you don't do a full shutdown of every single gnocchi 
service. my main concern is that this is not a 'lose data' situation like 
when you misconfigure other options; this will corrupt your storage. i'll 
set aside the live upgrade discussion for now so we don't get sidetracked.


>
> I think it's worth it only if you use replicas – and I don't think 2 is
> enough, I'd try 3 at least, and make it configurable. It'll reduce a lot
> lock-contention (e.g. by 17x time in my previous example).

i could get the same reduction in lock contention if i added basic 
partitioning :P

> As far as I'm concerned, since the number of replicas is configurable,
> you can add a knob that would set replicas=number_of_metricd_worker that
> would implement the current behaviour you implemented – every worker
> tries to grab every sack.

do we want it configurable? tbh, would anyone configure it or know 
how to configure it? even for us, we're just guessing somewhat, lol. i'm 
going to leave it static for now.

>
> We're not leveraging the re-balancing aspect of hashring, that's true.
> We could probably use any dumber system to spread sacks across workers,
> We could stick to the good ol' "len(sacks) / len(workers in the group)".
>
> But I think there's a couple of things down the road that may help us:
> Using the hashring makes sure worker X does not jump from sacks [A, B,
> C], to [W, X, Y, Z] but just to [A, B] or [A, B, C, X]. That should
> minimize lock contention when bringing up/down new workers. I admit it's
> a very marginal win, but… it comes free with it.
> Also, I envision a push based approach in the future (to replace the
> metricd_processing_delay) which will require worker to register to
> sacks. Making sure the rebalancing does not shake everything but is
> rather smooth will also reduce workload around that. Again, it comes
> free.
>

this is not really free. choosing hashring means we will have idle 
workers and more complexity in figuring out what each of the other 
agents in the group looks like. it's a trade-off (especially considering how 
few keys to nodes we have), which is why i brought up the question.

i'll be honest, i'll probably still switch back to hashring... but i want 
to make sure we're not just considering hashring and nothing else.
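
as a rough illustration of that trade-off (toy code only, not gnocchi's or
tooz's actual hashring), compare how 2048 sacks land on 6 workers with a plain
modulo split versus a naive hash 'ring'; the ring split is visibly less
uniform, which is the idle-worker effect i keep hitting:

# toy illustration only -- not gnocchi's or tooz's real code.
import hashlib
from collections import Counter

NUM_SACKS = 2048
WORKERS = ["worker-%d" % i for i in range(6)]


def modulo_assignment():
    # plain "sack % number_of_workers" split: perfectly uniform.
    return Counter(WORKERS[sack % len(WORKERS)] for sack in range(NUM_SACKS))


def naive_ring_assignment():
    # each worker gets one point on a ring; a sack is owned by the first
    # worker point at or after the sack's own hash (wrapping around).
    def point(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    ring = sorted((point(w), w) for w in WORKERS)
    counts = Counter()
    for sack in range(NUM_SACKS):
        h = point("sack-%d" % sack)
        owner = next((w for p, w in ring if p >= h), ring[0][1])
        counts[owner] += 1
    return counts


print("modulo:", modulo_assignment())
print("ring:  ", naive_ring_assignment())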

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker][OSC] Command naming specs

2017-04-18 Thread Jay Pipes

On 04/17/2017 07:46 PM, Sridhar Ramaswamy wrote:

I like this suggestion. Keyword 'vnf' is the closest to the unit of
things orchestrated by Tacker. Building on that we can have,

openstack vnf create - /create a single VNF based on TOSCA template/
openstack vnf service create - /create a service using a collection of VNFs/
openstack vnf graph create - /create a forwarding graph using a
collection of VNFs/


++ That is concise and easy to understand/read.


It is not an overlap per-se, it is more at a different abstraction
level. The later is a general purpose, lower-level SFC API based on
neutron ports. Former is a higher level YAML (TOSCA) template based
description of a SFC specially geared for NFV use-cases - implemented
using the lower-level networking-sfc APIs. It is analogous to Heat
OS::Nova::Server <-> Nova Compute APIs.


Understood, thanks for the clarification Sridhar!

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Emails for OpenStack R Release Name voting going out - please be patient

2017-04-18 Thread Sean McGinnis
On Tue, Apr 18, 2017 at 06:50:15AM +, Eduardo Gonzalez wrote:
> Hi all,
> is there any news about the mailing issue? I still have not received mine.
> 

Same for me - I still have not seen anything.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread Julien Danjou
On Tue, Apr 18 2017, gordon chung wrote:

> the issue i see is not with how the sacks will be assigned to metricd 
> but how metrics (not daemons) are assigned to sacks. i don't think 
> storing the value in a storage object solves the issue, because when would 
> we load/read it? presumably when the api and metricd processes start up. 
> it seems this would require: 1) all services to be shut down and 2) a 
> completely clean incoming storage path. if either of the two steps isn't 
> done, you have corrupt incoming storage. if this is a requirement and both 
> are done successfully, it means any kind of 'live upgrade' is 
> impossible in gnocchi.

Live upgrade never has been supported in Gnocchi, so I don't see how
that's a problem. It'd be cool to support it for sure, but we're far
from having been able to implement it at any point in time in the past.
So it's not a new issue or anything like that. I really don't see
a problem with loading the number of sacks at startup.

> i did a test w/ 2 replicas (see: google sheet) and it's still 
> non-uniform but better than without replicas: ~4%-30% vs ~8%-45%. we 
> could also minimise the number of lock calls by dividing sacks across 
> workers per agent.
>
> going to play devil's advocate now: using hashring in our use case will 
> always hurt throughput (even with perfect distribution, since the sack 
> contents themselves are not uniform). returning to the original question, is 
> using hashring worth it? i don't think we're even leveraging the 
> re-balancing aspect of hashring.

I think it's worth it only if you use replicas – and I don't think 2 is
enough, I'd try 3 at least, and make it configurable. It'll reduce a lot
lock-contention (e.g. by 17x time in my previous example).
As far as I'm concerned, since the number of replicas is configurable,
you can add a knob that would set replicas=number_of_metricd_worker that
would implement the current behaviour you implemented – every worker
tries to grab every sack.

We're not leveraging the re-balancing aspect of hashring, that's true.
We could probably use any dumber system to spread sacks across workers,
We could stick to the good ol' "len(sacks) / len(workers in the group)".

But I think there's a couple of things down the road that may help us:
Using the hashring makes sure worker X does not jump from sacks [A, B,
C], to [W, X, Y, Z] but just to [A, B] or [A, B, C, X]. That should
minimize lock contention when bringing up/down new workers. I admit it's
a very marginal win, but… it comes free with it.
Also, I envision a push based approach in the future (to replace the
metricd_processing_delay) which will require worker to register to
sacks. Making sure the rebalancing does not shake everything but is
rather smooth will also reduce workload around that. Again, it comes
free.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread gordon chung


On 18/04/17 05:21 AM, Julien Danjou wrote:
>> - dynamic sack size
>> making number of sacks dynamic is a concern. previously, we said to have
>> sack size in conf file. the concern is that changing that option
>> incorrectly actually 'corrupts' the db to a state that it cannot recover
>> from. it will have stray unprocessed measures constantly. if we change
>> the db path incorrectly, we don't actually corrupt anything, we just
>> lose data. we've said we don't want sack mappings in indexer so it seems
>> to me, the only safe solution is to make it sack size static and only
>> changeable by hacking?
>
> Not hacking, just we need a proper tool to rebalance it.
> As I already wrote, I think it's good enough to have this documented and
> set to a moderated good value by default (e.g. 4096). There's no need to
> store it in a configuration file, it should be stored in the storage
> driver itself to avoid any mistake, when the storage is initialized via
> `gnocchi-upgrade'.

the issue i see is not with how the sacks will be assigned to metricd 
but how metrics (not daemons) are assigned to sacks. i don't think 
storing the value in a storage object solves the issue, because when would 
we load/read it? presumably when the api and metricd processes start up. 
it seems this would require: 1) all services to be shut down and 2) a 
completely clean incoming storage path. if either of the two steps isn't 
done, you have corrupt incoming storage. if this is a requirement and both 
are done successfully, it means any kind of 'live upgrade' is 
impossible in gnocchi.
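
to spell out the corruption concern (the hashing below is illustrative only,
not gnocchi's actual sack function): a measure is filed under something like
hash(metric_id) % num_sacks when it arrives, so if num_sacks changes while
unprocessed measures are still sitting in incoming storage, metricd simply
never looks in the sack they were written to:

# illustrative only -- not gnocchi's real sack-mapping code.
import hashlib
import uuid


def sack_for_metric(metric_id, num_sacks):
    digest = hashlib.md5(str(metric_id).encode()).hexdigest()
    return int(digest, 16) % num_sacks


metric = uuid.uuid4()
old_sack = sack_for_metric(metric, 2048)  # where measures were written
new_sack = sack_for_metric(metric, 4096)  # where workers now look
# with high probability old_sack != new_sack: measures already written to
# old_sack are never scheduled again unless every service is stopped and
# the incoming storage is drained/rebalanced before changing the value.
print(old_sack, new_sack)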

>
>> - sack distribution
>> to distribute sacks across workers, i initially implemented consistent
>> hashing. the issue i noticed is that because hashring is inherently has
>> non-uniform distribution[1], i would have workers sitting idle because
>> it was given less sacks, while other workers were still working.
>>
>> i tried also to implement jump hash[2], which improved distribution and
>> is in theory, less memory intensive as it does not maintain a hash
>> table. while better at distribution, it still is not completely uniform
>> and similarly, the less sacks per worker, the worse the distribution.
>>
>> lastly, i tried just simple locking where each worker is completely
>> unaware of any other worker and handles all sacks. it will lock the sack
>> it is working on, so if another worker tries to work on it, it will just
>> skip. this will effectively cause an additional requirement on locking
>> system (in my case redis) as each worker will make x lock requests where
>> x is number of sacks. so if we have 50 workers and 2048 sacks, it will
>> be 102K requests per cycle. this is in addition to the n number of lock
>> requests per metric (10K-1M metrics?). this does guarantee if a worker
>> is free and there is work to be done, it will do it.
>>
>> i guess the question i have is: by using a non-uniform hash, it seems we
>> gain possibly less load at the expense of efficiency/'speed'. the number
>> of sacks/tasks we have is stable, it won't really change. the number of
>> metricd workers may change but not constantly. lastly, the number of
>> sacks per worker will always be relatively low (10:1, 100:1 assuming max
>> number of sacks is 2048). given these conditions, do we need
>> consistent/jump hashing? is it better to just modulo sacks and ensure
>> 'uniform' distribution and allow for 'larger' set of buckets to be
>> reshuffled when workers are added?
>
> What about using the hashring with replicas (e.g. 3 by default) and a
> lock per sack? This should reduce largely the number of lock try that
> you see. If you have 2k sacks divided across 50 workers and each one has
> a replica, that make each process care about 122 metrics so they might
> send 122 acquire() try each, which is 50 × 122 = 6100 acquire request,
> 17 times less than 102k.
> This also solve the problem of non-uniform distribution, as having
> replicas make sure every node gets some work.

i did a test w/ 2 replicas (see: google sheet) and it's still 
non-uniform but better than without replicas: ~4%-30% vs ~8%-45%. we 
could also minimise the number of lock calls by dividing sacks across 
workers per agent.

going to play devil's advocate now: using hashring in our use case will 
always hurt throughput (even with perfect distribution, since the sack 
contents themselves are not uniform). returning to the original question, is 
using hashring worth it? i don't think we're even leveraging the 
re-balancing aspect of hashring.

>
> You can then probably remove the per-metric lock too: this is just used
> when processing new measures (here the sack lock is enough) and when
> expunging metrics. You can safely use the same sack lock for
> expunging metrics. We may just need to move it out of the janitor? Something to
> think about!
>

good point, we may not need to lock the sack for expunging at all, since 
it's already marked as deleted in the indexer so it is effectively not 
accessible.


-- 
gord

[openstack-dev] [neutron] [classifier] CCF Meeting

2017-04-18 Thread Duarte Cardoso, Igor
Hi all,

A reminder that the Common Classification Framework meeting is today at 1400 UTC in 
#openstack-meeting.

Agenda:
https://wiki.openstack.org/wiki/Neutron/CommonClassificationFramework#Discussion_Topic_18_April_2017
Feel free to add more topics.

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [third-party-ci] pkvmci ironic job breakage details

2017-04-18 Thread Vladyslav Drok
Hey Michael,

On Fri, Apr 14, 2017 at 6:51 PM, Michael Turek 
wrote:

> Hey ironic-ers,
>
> So our third party CI job for ironic has been, and remains, broken. I was
> able to do some investigation today and here's a summary of what we're
> seeing. I'm hoping someone might know the root of the problem.
>
> For reference, please see this paste and the logs of the job that I was
> working in:
> http://paste.openstack.org/show/606564/
> https://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-
> f597-448c-8ec2-164e9f710dd6/pkvmci/ironic/25/454625/10/
> check-ironic/tempest-dsvm-ironic-agent_ipmitool/0520958/
>
> I've redacted the credentials in the ironic node-show for obvious reasons
> but rest assured they are properly set. These commands are run while
> '/opt/stack/new/ironic/devstack/lib/ironic:wait_for_nova_resources' is
> looping.
>
> Basically, the ironic hypervisor for the node doesn't appear. As well,
> none of the node's properties make it to the hypervisor stats.
>
> Some more strangeness concerns the 'count' value from 'openstack
> hypervisor stats show': though no hypervisors appear, the count is still 1.
> Since the run was broken, I decided to delete node-0 (about 3-5 minutes
> before the run failed) and see if it updated the count. It did.
>
> Does anyone have any clue what might be happening here? Any advice would
> be appreciated!
>

So the failure seems to be here --
https://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/ironic/25/454625/10/check-ironic/tempest-dsvm-ironic-agent_ipmitool/0520958/screen-ir-api.txt.gz,
API and conductor are not able to communicate via RPC for some reason. Need
to investigate this more. Do you mind filing a bug about this?


>
> Thanks,
> mjturek
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cloudkitty] IRC Meeting

2017-04-18 Thread Christophe Sauthier

Dear OpenStackers

Since the last IRC meeting related to Cloudkitty was postponed, we 
have planned the next meeting for Tuesday 25th April at 12:30 UTC.


As usual, this IRC meeting will be on #cloudkitty.

I hope to see many of you there!

All the best

 Christophe


Christophe Sauthier   Mail : 
christophe.sauth...@objectif-libre.com

CEO   Mob : +33 (0) 6 16 98 63 96
Objectif LibreURL : www.objectif-libre.com
Au service de votre Cloud Twitter : @objectiflibre

Suivez les actualités OpenStack en français en vous abonnant à la Pause 
OpenStack

http://olib.re/pause-openstack

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-18 Thread Attila Fazekas
On Tue, Apr 18, 2017 at 11:04 AM, Arx Cruz  wrote:

>
>
> On Tue, Apr 18, 2017 at 10:42 AM, Steven Hardy  wrote:
>
>> On Mon, Apr 17, 2017 at 12:48:32PM -0400, Justin Kilpatrick wrote:
>> > On Mon, Apr 17, 2017 at 12:28 PM, Ben Nemec 
>> wrote:
>> > > Tempest isn't really either of those things.  According to another
>> message
>> > > in this thread it takes around 15 minutes to run just the smoke tests.
>> > > That's unacceptable for a lot of our CI jobs.
>> >
>>
>
> I'd rather spend 15 minutes running tempest than add a regression or a new
> bug, which has already happened in the past.
>
The smoke tests might not be the best test selection anyway; you should
pick some scenarios which do, for example, snapshots of images and volumes.
Yes, these are the slow ones, but they can run in parallel.

Very likely you do not really want to run all tempest tests, but a 10~20
minute run sounds reasonable for a sanity test.

The tempest config utility should also be extended with some parallel
capability, and should be able to use already-downloaded resources
(part of the image).

Tempest/testr/subunit worker balance is not always the best;
technically it would be possible to do dynamic balancing, but it would require
a lot of work.
Let me know when it becomes the main concern and I can check what can/cannot
be done.



>
> > Ben, is the issue merely the time it takes? Is it the effect that time
>> > taken has on hardware availability?
>>
>> It's both, but the main constraint is the infra job timeout, which is
>> about
>> 2.5hrs - if you look at our current jobs many regularly get close to (and
>> sometimes exceed this), so we just don't have the time budget available to
>> run exhaustive tests every commit.
>>
>
> We have green light from infra to increase the job timeout to 5 hours, we
> do that in our periodic full tempest job.
>

Sounds good, but I am afraid it could hurt more than help; it could
delay other things getting fixed by a lot,
especially if we get some extra flakiness because of foobar.

You cannot have all possible tripleo configs on the gate anyway,
so something will pass which will require a quick fix.

IMHO the only real solution is making the steps before the test run faster or
shorter.

Do you have any option to start the tempest-running jobs in a more
developed state?
I mean, having more things already done at start time
(images/snapshots)
and just doing a fast upgrade at the beginning of the job.

OpenStack installation can be completed in a `fast` way (~a minute) on
RHEL/Fedora systems after the yum steps; also, if you are able to aggregate
all yum steps into a single command execution (transaction), you are
generally able to save a lot of time.

There are plenty of things that can be made more efficient before the test
run; once you start treating everything that accounts for more than 30 sec
of time as evil, this can happen soon.

For example, just executing the cpython interpreter for the openstack
commands takes over 30 sec; the work they are doing can be done in a much,
much faster way.

A lot of install steps actually do not depend on each other, which allows
more things to be done in parallel; we generally have more cores than GHz.
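
Purely as an illustration of that last point (the step names below are
placeholders, not real devstack/TripleO steps), independent steps can be
launched together instead of serially:

# placeholder commands only -- the point is the shape: steps that do not
# depend on each other run concurrently instead of one after another.
import subprocess
from concurrent.futures import ThreadPoolExecutor

independent_steps = [
    ["echo", "configure keystone"],
    ["echo", "configure glance"],
    ["echo", "configure neutron"],
    ["echo", "configure nova"],
]

with ThreadPoolExecutor(max_workers=len(independent_steps)) as pool:
    for result in pool.map(subprocess.run, independent_steps):
        result.check_returncode()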



>
>>
>> > Should we focus on how much testing we can get into N time period?
>> > Then how do we decide an optimal N
>> > for our constraints?
>>
>> Well yeah, but that's pretty much how/why we ended up with pingtest, it's
>> simple, fast, and provides an efficient way to do smoke tests, e.g
>> creating
>> just one heat resource is enough to prove multiple OpenStack services are
>> running, as well as the DB/RPC etc etc.
>>
>> > I've been working on a full up functional test for OpenStack CI builds
>> > for a long time now, it works but takes
>> > more than 10 hours. IF you're interested in results kick through to
>> > Kibana here [0]. Let me know off list if you
>> > have any issues, the presentation of this data is all experimental
>> still.
>>
>> This kind of thing is great, and I'd support more exhaustive testing via
>> periodic jobs etc, but the reality is we need to focus on "bang for buck"
>> e.g the deepest possible coverage in the most minimal amount of time for
>> our per-commit tests - we rely on the project gates to provide a full API
>> surface test, and we need to focus on more basic things like "did the
>> service
>> start", and "is the API accessible".  Simple crud operations on a subset
>> of
>> the API's is totally fine for this IMO, whether via pingtest or some other
>> means.
>>
>>
> Right now we do have a periodic job running full tempest, with a few
> skips, and because of the lack of tempest tests in the patches, it's being
> pretty hard to keep it stable enough to have a 100% pass, and of course,
> also the installation very often fails (like in the last five days).
> For example, [1] is the latest run we have in periodic job that we get
> results from tempest, and we have 114 failures that was 

Re: [openstack-dev] [devstack] uwsgi for API services - RSN

2017-04-18 Thread Sean Dague
Ok, the patch series has come together now, and
https://review.openstack.org/#/c/456344/ remains the critical patch.

This introduces a new global config option: "WSGI_MODE", which will be
either "uwsgi" or "mod_wsgi" (for the transition).

https://review.openstack.org/#/c/456717/6/lib/placement shows what it
takes to make something that's current running under mod_wsgi to run
under uwsgi in this model.

The intent is that uwsgi mode becomes primary RSN, as that provides a
more consistent experience for development, and still exercises the API
services as real wsgi applications.

-Sean

On 04/13/2017 08:01 AM, Sean Dague wrote:
> One of the many reasons for getting all our API services running wsgi
> under a real webserver is to get out of the custom ports for all
> services game. However, because of some of the limits of apache
> mod_wsgi, we really haven't been able to do that in our development
> enviroment. Plus, the moment we go to mod_wsgi for some services, the
> entire development workflow for "change this library, refresh the
> following services" changes dramatically.
> 
> It would be better to have a consistent story here.
> 
> So there is some early work up to use apache mod_proxy_uwsgi as the
> listener, and uwsgi processes running under systemd for all the
> services. These bind only to a unix local socket, not to a port.
> https://review.openstack.org/#/c/456344/
> 
> Early testing locally has been showing progress. We still need to prove
> out a few things, but this should simplify a bunch of the setup. And
> coming with systemd will converge us back to a more consistent
> development workflow when updating common code in a project that has
> both API services and workers.
> 
> For projects that did the mod_wsgi thing in a devstack plugin, this is
> going to require some adjustment. Exactly what is not yet clear, but
> it's going to be worth following that patch.
> 
>   -Sean
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] Videoconf meeting for Actor refactor proposal

2017-04-18 Thread Antoni Segura Puimedon
Hi Kuryrs,

Tomorrow there's going to be a meeting to discuss Ilya's proposal for
a refactor of Kuryr controller entities based on the actor model. The
meeting will be held at:

https://bluejeans.com/826701641

12:00 UTC

I'll post the recording probably the day after.

See you there,

Toni

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-18 Thread Luigi Toscano
On Monday, 17 April 2017 18:28:24 CEST Ben Nemec wrote:
> On 04/17/2017 10:51 AM, Emilien Macchi wrote:
> > We haven't got much feedback from TripleO core reviewers, who are
> > usually more involved on this topic. I'll give a chance to let them
> > talk because we take some actions based on the feedback brought in
> > this discussion.
> 
> I started to write a response last week and realized I didn't have a
> coherent recommendation, but here are my semi-organized thoughts:
> 
> The pingtest was created for two main reasons.  First, it's fast.  Less
> than three minutes in most CI jobs.  Second, it's simple.  We've added a
> bunch of stuff for resource cleanup and such, but in essence it's four
> commands: glance image-create, neutron net-create, neutron
> subnet-create, and heat stack-create.  It would be hard to come up with
> a useful test that is meaningfully simpler.
> 
> Tempest isn't really either of those things.  According to another
> message in this thread it takes around 15 minutes to run just the smoke
> tests.  That's unacceptable for a lot of our CI jobs.  It also tends to
> require a lot more configuration in my experience.

I think that you are talking about the "full set of Tempest tests" here. My 
point is that it is possible to have a test which has the exact same semantics as the 
current ping test, but written using tempest.lib and relying on the other 
tooling from Tempest to run it (tempest run/ostestr).

With that in place, it would be trivial to decide whether to run just that test 
or run other tests in other jobs (it would be a matter of a simple regexp), and 
it would simplify the code in other tools like tripleo-quickstart (no need to 
keep the preparation phase for two runners: validation and tempest).
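
Just to illustrate the semantics I mean (a rough sketch, not TripleO's code:
it shells out to the openstack CLI with made-up resource names rather than
using tempest.lib service clients, and the heat step assumes the heat OSC
plugin plus a local template file):

# Rough sketch only: the same four-step semantic as the pingtest (image,
# network, subnet, heat stack), expressed as a test case.  A real
# tempest.lib version would use service clients instead of the CLI.
import subprocess

import testtools


def openstack(*args):
    # Run an openstack CLI command; raises if it exits non-zero.
    return subprocess.check_output(("openstack",) + args)


class PingtestEquivalentTest(testtools.TestCase):

    def test_minimal_crud(self):
        openstack("image", "create", "--disk-format", "qcow2",
                  "--file", "cirros.img", "pingtest-image")
        self.addCleanup(openstack, "image", "delete", "pingtest-image")

        openstack("network", "create", "pingtest-net")
        self.addCleanup(openstack, "network", "delete", "pingtest-net")

        openstack("subnet", "create", "--network", "pingtest-net",
                  "--subnet-range", "192.0.2.0/24", "pingtest-subnet")

        openstack("stack", "create", "--template", "tenant-stack.yaml",
                  "--wait", "pingtest-stack")
        self.addCleanup(openstack, "stack", "delete", "--yes", "--wait",
                        "pingtest-stack")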

Ciao
-- 
Luigi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would we like to have horizon dashboard for neutron stadium projects?

2017-04-18 Thread Akihiro Motoki
Thanks for your feedback, all!
It seems we have a consensus and the route is "(a) dashboard
repository per project".
I would suggest <project>-dashboard as a repository name, where <project> is
your main repo name.


2017-04-11 0:09 GMT+09:00 Akihiro Motoki :
> Hi neutrinos (and horizoners),
>
> As the title says, where would we like to have horizon dashboard for
> neutron stadium projects?
> There are several projects under neutron stadium and they are trying
> to add dashboard support.
>
> I would like to raise this topic again. No dashboard support lands since then.
> Also Horizon team would like to move in-tree neutron stadium dashboard
> (VPNaaS and FWaaS v1 dashboard) to outside of horizon repo.
>
> Possible approaches
> 
>
> Several possible options in my mind:
> (a) dashboard repository per project
> (b) dashboard code in individual project
> (c) a single dashboard repository for all neutron stadium projects
>
> Which one sounds better?
>
> Pros and Cons
> 
>
> (a) dashboard repository per project
>   example, networking-sfc-dashboard repository for networking-sfc
>   Pros
>- Can use existing horizon related project convention and knowledge
>  (directory structure, testing, translation support)
>- Not related to the neutron stadium inclusion. Each project can
> provide its dashboard
>  support regardless of neutron stadium inclusion.
>  Cons
>- An additional repository is needed.
>
> (b) dashboard code in individual project
>   example, dashboard module for networking-sfc
>   Pros:
>- No additional repository
>- Not related to the neutron stadium inclusion. Each project can
> provide its dashboard
>  support regardless of neutron stadium inclusion.
>  Cons:
>- Requires extra efforts to support neutron and horizon codes in a
> single repository
>  for testing and translation supports. Each project needs to
> explore the way.
>
> (c) a single dashboard repository for all neutron stadium projects
>(something like neutron-advanced-dashboard)
>   Pros:
> - No additional repository per project
>   Each project do not need a basic setup for dashboard and
> possible makes things simple.
>   Cons:
> - Inclusion criteria depending on the neutron stadium inclusion/exclusion
>   (Similar discussion happens as for neutronclient OSC plugin)
>   Project before neutron stadium inclusion may need another 
> implementation.
>
>
> My vote is (a) or (c) (to avoid mixing neutron and dashboard codes in a repo).
>
> Note that as dashboard supports for feature in the main neutron repository
> are implemented in the horizon repository as we discussed several months ago.
> As an example, trunk support is being development in the horizon repo.
>
> Thanks,
> Akihiro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-18 Thread Andrea Frittoli
On Tue, Apr 18, 2017 at 10:07 AM Arx Cruz  wrote:

> On Tue, Apr 18, 2017 at 10:42 AM, Steven Hardy  wrote:
>
>> On Mon, Apr 17, 2017 at 12:48:32PM -0400, Justin Kilpatrick wrote:
>> > On Mon, Apr 17, 2017 at 12:28 PM, Ben Nemec 
>> wrote:
>> > > Tempest isn't really either of those things.  According to another
>> message
>> > > in this thread it takes around 15 minutes to run just the smoke tests.
>>
>

The smoke filter may take 15 min to run, but it is possible to curate
TripleO's own filter which could take less. It really depends on which areas
you mostly see regressions in, but I would assume the most important bit for
you is that services are running and working fine with each other, so a
couple of scenario tests may be enough to prove that.

If there is no existing test that works for you in Tempest, you might even
write your
own Tempest plugin - that is as long as you are satisfied with API driven
checks.
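
For reference, the plugin boilerplate is small; a minimal sketch (the
"tripleo_smoke" package and its paths are invented names) looks roughly like:

# Minimal sketch of a Tempest plugin class; "tripleo_smoke" and its tests/
# directory are invented names.  Wiring it up also needs a
# 'tempest.test_plugins' entry point in the package's setup.cfg.
import os

from tempest.test_discover import plugins


class TripleOSmokePlugin(plugins.TempestPlugin):

    def load_tests(self):
        # Tell Tempest where to discover this plugin's tests.
        base_path = os.path.split(os.path.dirname(os.path.abspath(__file__)))[0]
        full_test_dir = os.path.join(base_path, "tripleo_smoke/tests")
        return full_test_dir, base_path

    def register_opts(self, conf):
        # No extra config options in this toy example.
        pass

    def get_opt_lists(self):
        return []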

andreaf


> > > That's unacceptable for a lot of our CI jobs.
>> >
>>
>
> I rather spend 15 minutes running tempest than add a regression or a new
> bug, which already happen in the past.
>

>
>> > Ben, is the issue merely the time it takes? Is it the effect that time
>> > taken has on hardware availability?
>>
>> It's both, but the main constraint is the infra job timeout, which is
>> about
>> 2.5hrs - if you look at our current jobs many regularly get close to (and
>> sometimes exceed this), so we just don't have the time budget available to
>> run exhaustive tests every commit.
>>
>
> We have green light from infra to increase the job timeout to 5 hours, we
> do that in our periodic full tempest job.
>
>
>>
>> > Should we focus on how much testing we can get into N time period?
>> > Then how do we decide an optimal N
>> > for our constraints?
>>
>> Well yeah, but that's pretty much how/why we ended up with pingtest, it's
>> simple, fast, and provides an efficient way to do smoke tests, e.g
>> creating
>> just one heat resource is enough to prove multiple OpenStack services are
>> running, as well as the DB/RPC etc etc.
>>
>> > I've been working on a full up functional test for OpenStack CI builds
>> > for a long time now, it works but takes
>> > more than 10 hours. IF you're interested in results kick through to
>> > Kibana here [0]. Let me know off list if you
>> > have any issues, the presentation of this data is all experimental
>> still.
>>
>> This kind of thing is great, and I'd support more exhaustive testing via
>> periodic jobs etc, but the reality is we need to focus on "bang for buck"
>> e.g the deepest possible coverage in the most minimal amount of time for
>> our per-commit tests - we rely on the project gates to provide a full API
>> surface test, and we need to focus on more basic things like "did the
>> service
>> start", and "is the API accessible".  Simple crud operations on a subset
>> of
>> the API's is totally fine for this IMO, whether via pingtest or some other
>> means.
>>
>>
> Right now we do have a periodic job running full tempest, with a few
> skips, and because of the lack of tempest tests in the patches, it's being
> pretty hard to keep it stable enough to have a 100% pass, and of course,
> also the installation very often fails (like in the last five days).
> For example, [1] is the latest run we have in periodic job that we get
> results from tempest, and we have 114 failures that were caused by some new
> code/change, and I have no idea which one it was; just looking at the
> failures, I can notice that smoke tests plus minimum basic scenario tests
> would catch these failures and the developer could fix it and make me happy
> :)
> Now I have to spend several hours installing and debugging each one of
> those tests to identify where/why it fails.
> Before this run, we got 100% pass, but unfortunately I don't have the
> results anymore, it was removed already from logs.openstack.org
>
>
>
>> Steve
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> [1]
> http://logs.openstack.org/periodic/periodic-tripleo-ci-centos-7-ovb-nonha-tempest-oooq/0072651/logs/oooq/stackviz/#/stdin
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread Julien Danjou
On Mon, Apr 17 2017, gordon chung wrote:

Hi Gordon,

> i've started to implement multiple buckets and the initial tests look 
> promising. here's some things i've done:
>
> - dropped the scheduler process and allow processing workers to figure 
> out tasks themselves
> - each sack is now handled fully (not counting anything added after 
> processing worker)
> - number of sacks are static
>
> after the above, i've been testing it and it works pretty well, i'm able 
> to process 40K metrics, 60 points each, in 8-10mins with 54 workers when 
> it took significantly longer before.

Great!

> the issues i've run into:
>
> - dynamic sack size
> making number of sacks dynamic is a concern. previously, we said to have 
> sack size in conf file. the concern is that changing that option 
> incorrectly actually 'corrupts' the db to a state that it cannot recover 
> from. it will have stray unprocessed measures constantly. if we change 
> the db path incorrectly, we don't actually corrupt anything, we just 
> lose data. we've said we don't want sack mappings in indexer so it seems 
> to me, the only safe solution is to make it sack size static and only 
> changeable by hacking?

Not hacking, just we need a proper tool to rebalance it.
As I already wrote, I think it's good enough to have this documented and
set to a moderated good value by default (e.g. 4096). There's no need to
store it in a configuration file, it should be stored in the storage
driver itself to avoid any mistake, when the storage is initialized via
`gnocchi-upgrade'.

> - sack distribution
> to distribute sacks across workers, i initially implemented consistent 
> hashing. the issue i noticed is that because hashring is inherently has 
> non-uniform distribution[1], i would have workers sitting idle because 
> it was given less sacks, while other workers were still working.
>
> i tried also to implement jump hash[2], which improved distribution and 
> is in theory, less memory intensive as it does not maintain a hash 
> table. while better at distribution, it still is not completely uniform 
> and similarly, the less sacks per worker, the worse the distribution.
>
> lastly, i tried just simple locking where each worker is completely 
> unaware of any other worker and handles all sacks. it will lock the sack 
> it is working on, so if another worker tries to work on it, it will just 
> skip. this will effectively cause an additional requirement on locking 
> system (in my case redis) as each worker will make x lock requests where 
> x is number of sacks. so if we have 50 workers and 2048 sacks, it will 
> be 102K requests per cycle. this is in addition to the n number of lock 
> requests per metric (10K-1M metrics?). this does guarantee if a worker 
> is free and there is work to be done, it will do it.
>
> i guess the question i have is: by using a non-uniform hash, it seems we 
> gain possibly less load at the expense of efficiency/'speed'. the number 
> of sacks/tasks we have is stable, it won't really change. the number of 
> metricd workers may change but not constantly. lastly, the number of 
> sacks per worker will always be relatively low (10:1, 100:1 assuming max 
> number of sacks is 2048). given these conditions, do we need 
> consistent/jump hashing? is it better to just modulo sacks and ensure 
> 'uniform' distribution and allow for 'larger' set of buckets to be 
> reshuffled when workers are added?

What about using the hashring with replicas (e.g. 3 by default) and a
lock per sack? This should reduce largely the number of lock try that
you see. If you have 2k sacks divided across 50 workers and each one has
a replica, that make each process care about 122 metrics so they might
send 122 acquire() try each, which is 50 × 122 = 6100 acquire request,
17 times less than 102k.
This also solve the problem of non-uniform distribution, as having
replicas make sure every node gets some work.
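
Spelling out that arithmetic (back-of-the-envelope numbers only; the real
hashring with replicas behaves a bit differently, this just shows the order
of magnitude):

# Back-of-the-envelope only, not the actual tooz hashring behaviour.
sacks = 2048
workers = 50
replicas = 3

# Current behaviour: every worker polls every sack each cycle.
acquires_every_worker_every_sack = workers * sacks      # 102 400

# Hashring with 3 replicas: each sack is owned by ~3 workers, so each
# worker only cares about roughly sacks * replicas / workers of them.
sacks_per_worker = sacks * replicas // workers          # ~122
acquires_with_replicas = workers * sacks_per_worker     # ~6 100, ~17x fewer

print(acquires_every_worker_every_sack, sacks_per_worker, acquires_with_replicas)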

You can then probably remove the per-metric lock too: this is just used
when processing new measures (here the sack lock is enough) and when
expunging metrics. You can safely use the same sack lock for
expunging metrics. We may just need to move it out of the janitor? Something to
think about!

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-18 Thread Arx Cruz
On Tue, Apr 18, 2017 at 10:42 AM, Steven Hardy  wrote:

> On Mon, Apr 17, 2017 at 12:48:32PM -0400, Justin Kilpatrick wrote:
> > On Mon, Apr 17, 2017 at 12:28 PM, Ben Nemec 
> wrote:
> > > Tempest isn't really either of those things.  According to another
> message
> > > in this thread it takes around 15 minutes to run just the smoke tests.
> > > That's unacceptable for a lot of our CI jobs.
> >
>

I'd rather spend 15 minutes running tempest than add a regression or a new
bug, which has already happened in the past.


> > Ben, is the issue merely the time it takes? Is it the effect that time
> > taken has on hardware availability?
>
> It's both, but the main constraint is the infra job timeout, which is about
> 2.5hrs - if you look at our current jobs many regularly get close to (and
> sometimes exceed this), so we just don't have the time budget available to
> run exhaustive tests every commit.
>

We have green light from infra to increase the job timeout to 5 hours, we
do that in our periodic full tempest job.


>
> > Should we focus on how much testing we can get into N time period?
> > Then how do we decide an optimal N
> > for our constraints?
>
> Well yeah, but that's pretty much how/why we ended up with pingtest, it's
> simple, fast, and provides an efficient way to do smoke tests, e.g creating
> just one heat resource is enough to prove multiple OpenStack services are
> running, as well as the DB/RPC etc etc.
>
> > I've been working on a full up functional test for OpenStack CI builds
> > for a long time now, it works but takes
> > more than 10 hours. IF you're interested in results kick through to
> > Kibana here [0]. Let me know off list if you
> > have any issues, the presentation of this data is all experimental still.
>
> This kind of thing is great, and I'd support more exhaustive testing via
> periodic jobs etc, but the reality is we need to focus on "bang for buck"
> e.g the deepest possible coverage in the most minimal amount of time for
> our per-commit tests - we rely on the project gates to provide a full API
> surface test, and we need to focus on more basic things like "did the
> service
> start", and "is the API accessible".  Simple crud operations on a subset of
> the API's is totally fine for this IMO, whether via pingtest or some other
> means.
>
>
Right now we do have a periodic job running full tempest, with a few skips,
and because of the lack of tempest tests in the patches, it's being pretty
hard to keep it stable enough to have a 100% pass, and of course, also the
installation very often fails (like in the last five days).
For example, [1] is the latest run we have in periodic job that we get
results from tempest, and we have 114 failures that were caused by some new
code/change, and I have no idea which one it was; just looking at the
failures, I can notice that smoke tests plus minimum basic scenario tests
would catch these failures and the developer could fix it and make me happy
:)
Now I have to spend several hours installing and debugging each one of
those tests to identify where/why it fails.
Before this run, we got 100% pass, but unfortunately I don't have the
results anymore, it was removed already from logs.openstack.org



> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

[1]
http://logs.openstack.org/periodic/periodic-tripleo-ci-centos-7-ovb-nonha-tempest-oooq/0072651/logs/oooq/stackviz/#/stdin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-18 Thread Steven Hardy
On Mon, Apr 17, 2017 at 12:48:32PM -0400, Justin Kilpatrick wrote:
> On Mon, Apr 17, 2017 at 12:28 PM, Ben Nemec  wrote:
> > Tempest isn't really either of those things.  According to another message
> > in this thread it takes around 15 minutes to run just the smoke tests.
> > That's unacceptable for a lot of our CI jobs.
> 
> Ben, is the issue merely the time it takes? Is it the effect that time
> taken has on hardware availability?

It's both, but the main constraint is the infra job timeout, which is about
2.5hrs - if you look at our current jobs many regularly get close to (and
sometimes exceed this), so we just don't have the time budget available to
run exhaustive tests every commit.

> Should we focus on how much testing we can get into N time period?
> Then how do we decide an optimal N
> for our constraints?

Well yeah, but that's pretty much how/why we ended up with pingtest, it's
simple, fast, and provides an efficient way to do smoke tests, e.g creating
just one heat resource is enough to prove multiple OpenStack services are
running, as well as the DB/RPC etc etc.

> I've been working on a full up functional test for OpenStack CI builds
> for a long time now, it works but takes
> more than 10 hours. IF you're interested in results kick through to
> Kibana here [0]. Let me know off list if you
> have any issues, the presentation of this data is all experimental still.

This kind of thing is great, and I'd support more exhaustive testing via
periodic jobs etc, but the reality is we need to focus on "bang for buck"
e.g the deepest possible coverage in the most minimal amount of time for
our per-commit tests - we rely on the project gates to provide a full API
surface test, and we need to focus on more basic things like "did the service
start", and "is the API accessible".  Simple crud operations on a subset of
the API's is totally fine for this IMO, whether via pingtest or some other
means.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] blog post about being PTL in OpenStack

2017-04-18 Thread Steven Hardy
On Thu, Apr 13, 2017 at 08:22:50PM -0400, Emilien Macchi wrote:
> Exceptionally, I'll self-promote a blog post that I wrote about my
> personal experience of being PTL in OpenStack.
> 
> http://my1.fr/blog/my-journey-as-an-openstack-ptl/

Thanks for writing this up Emilien - it aligns pretty well with my previous
experiences as both Heat and TripleO PTL, particularly the part about
dealing with interuptions and requests for help.

I agree the more autonomous squads approach has helped in that regard, as we
have to acknowledge that individuals and their time only scale so far.

> My hope is to engage some discussion about what our community thinks
> about this role and how we could bring more leaders in OpenStack.
> This blog post also explains why I won't run for PTL during the next cycle.

Thanks for your great work as TripleO PTL over the last two cycles, and in
particular for your constant focus on improving CI coverage and team-building.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-18 Thread Steven Hardy
On Thu, Apr 06, 2017 at 11:53:04AM +0200, Martin André wrote:
> Hellooo,
> 
> I'd like to propose we extend Florian Fuchs +2 powers to the
> tripleo-validations project. Florian is already core on tripleo-ui
> (well, tripleo technically so this means there is no changes to make
> to gerrit groups).
> 
> Florian took over many of the stalled patches in tripleo-validations
> and is now the principal contributor in the project [1]. He has built
> a good expertise over the last months and I think it's time he has
> officially the right to approve changes in tripleo-validations.
> 
> Consider this my +1 vote.

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Emails for OpenStack R Release Name voting going out - please be patient

2017-04-18 Thread Eduardo Gonzalez
Hi all,
is there any news about the mailing issue? I still have not received mine.

Regards

On Thu, Apr 13, 2017, 1:51 AM Sean McGinnis  wrote:

> On Wed, Apr 12, 2017 at 05:27:13PM -0500, Jay S Bryant wrote:
> > I also didn't receive an e-mail.  :-(
> >
> I don't think we need to perpetuate this thread for everybody
> > that didn't get it. I think it's safe to say that a large number
> > of folks have not received the email. (Me included) :)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Florian Fuchs for tripleo-validations core

2017-04-18 Thread Jiri Tomasek

+1!


On 6.4.2017 11:53, Martin André wrote:

Hellooo,

I'd like to propose we extend Florian Fuchs +2 powers to the
tripleo-validations project. Florian is already core on tripleo-ui
(well, tripleo technically, so this means there are no changes to make
to gerrit groups).

Florian took over many of the stalled patches in tripleo-validations
and is now the principal contributor in the project [1]. He has built
a good expertise over the last months and I think it's time he has
officially the right to approve changes in tripleo-validations.

Consider this my +1 vote.

Martin

[1] 
http://stackalytics.com/?module=tripleo-validations=patches=pike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Buildroot deploy image POC ready for testing

2017-04-18 Thread Chris Smart
Hi all,

Over the last couple of months I have been working on creating a
smaller, more flexible deploy image for Ironic. It is now ready for
testing, if anyone is interested.

For this proof of concept I chose Buildroot [1], a well regarded, simple
to use tool for building embedded Linux images. The builds are done as a
regular non-root user and it has a menuconfig system (like the Linux
kernel) for customising the build. It supports caching of downloads and
ccache for builds.

With the current default configuration:

* Linux kernel is ~2MB
* Compressed initramfs image is ~25MB
* Bootable ISO image ~26MB
* Passing the OpenStack Ironic gating tests [2]
* Highly customisable (thanks to Buildroot)

All of the source code for building the image is up on my GitHub account
in the ipa-buildroot repository. [3]

I have also written up documentation which should walk you through the
whole build and customisation process. [4]

The ipa-buildroot repository contains the IPA specific Buildroot
configurations and tracks upstream Buildroot in a Git submodule
(actually I'm carrying one patch on top of Buildroot for now, but the
goal is to use pristine upstream Buildroot).

*In addition to this* I also have patches for ironic-python-agent on my
GitHub account that adds support for Buildroot to imagebuild. [5]

This will handle the dependencies, customisation and build of the
Buildroot Ironic image using make (like we do with tinyipa and coreos).

Using the ironic-python-agent repo is as simple as:

$ git clone https://github.com/csmart/ironic-python-agent.git
$ cd ironic-python-agent/imagebuild/buildroot
$ make help
$ # or
$ ./build-buildroot.sh --help

Buildroot will compile the kernel and initramfs, then post-build scripts
clone the Ironic Python Agent repository (which defaults to upstream but
you can specify the repo and commit) and create Python wheels to
install into the target.

Customising the build is pretty easy, too:

$ make menuconfig
$ # do buildroot changes
$ make

I created the kernel config from scratch (using tinyconfig) and
deliberately tried to balance size and functionality. It should boot on
most Intel based machines (BIOS and UEFI), however hardware support like
hard disk and ethernet controllers is deliberately limited (see kernel
config [6]). The goal was to start small and add more support as needed.

Customising the Linux kernel is also pretty easy, though:

$ make linux-menuconfig
$ # do kernel changes
$ make

Each time you run make, it'll pick up where you left off and re-create
your images.

Really happy for anyone to test it out and answer any questions you
have.

Many thanks!
Chris

[1] https://buildroot.uclibc.org/
[2] https://review.openstack.org/#/c/445763/
[3] https://github.com/csmart/ipa-buildroot
[4]
https://github.com/csmart/ipa-buildroot#openstack-ironic-python-agent
[5]
https://github.com/csmart/ironic-python-agent/tree/buildroot/imagebuild/buildroot
[6]
https://github.com/csmart/ipa-buildroot/blob/master/buildroot-ipa/board/openstack/ipa/linux-4.10.6.config

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][Mistral][Murano] Glare client was added to global requirements

2017-04-18 Thread Ренат Ахмеров
Awesome, thanks Mike!

Looking forward to implementing Mistral/Glare integration.

Renat Akhmerov
@Nokia

On 17 Apr 2017, 23:06 +0700, Mikhail Fedosin , wrote:
> Hello!
>
> I'm happy to announce that the glare client was recently added to openstack's 
> global requirements. The current version (0.3) contains the 'glareclient' library, 
> a plugin to openstack client ('openstack artifact'), and a standalone cli 
> ('glare').
>
> Now I'm finishing writing the documentation about cli commands and I hope it 
> will be published this week. It means that soon projects like murano and 
> mistral may begin the experimental integration with glare v1 api.
>
> If you have any questions - feel free to ask them in our irc channel 
> #openstack-glare
>
> Best regards,
> Mikhail Fedosin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] New CI Job definitions

2017-04-18 Thread Ренат Ахмеров
Thanks Brad!

So kombu gate is now non-apache, right?

Thanks

Renat Akhmerov
@Nokia

On 17 Apr 2017, 22:17 +0700, Brad P. Crochet , wrote:
> Hi y'all...
>
> In the midst of trying to track down some errors being seen with tempest 
> tests whilst running under mod_wsgi/apache, I've made it so that the devstack 
> plugin is capable of also running with mod_wsgi/apache.[1]
>
> In doing so, It's become the default devstack job. I've also now created some 
> 'non-apache' jobs that basically are what the old jobs did, just renamed.
>
> Also, I've consolidated the job definitions (the original and the kombu) to 
> simplify and DRY out the jobs. You can see the infra review here.[2]
>
> The list of jobs will be:
> gate-mistral-devstack-dsvm-ubuntu-xenial-nv
> gate-mistral-devstack-dsvm-non-apache-ubuntu-xenial-nv
> gate-mistral-devstack-dsvm-kombu-ubuntu-xenial-nv
>
> Note that the trusty jobs have been eliminated.
>
> Essentially, I've added a '{special}' tag to the job definition, allowing us 
> to create special-cased devstack jobs. So, as you can see, I've migrated the 
> kombu job to be such a thing. It should also be possible to combine them.
>
> [1] https://review.openstack.org/#/c/454710/
> [2] https://review.openstack.org/#/c/457106/
> --
> Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
> Principal Software Engineer
> (c) 704.236.9385
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev