[openstack-dev] [Networking-vSphere] Does Networking-vSphere support Ocata?

2017-03-03 Thread Shake Chen
Hi Networking-vSphere Team,

Does the Networking-vSphere project support Ocata? I checked GitHub and only
see a Mitaka tag. What about Ocata?



-- 
Shake Chen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][tripleo-quickstart] Multiple parallel deployments on a single virthost

2017-03-03 Thread Lars Kellogg-Stedman
I've just submitted a slew of changes to tripleo-quickstart with the
ultimate goal of being able to spin up multiple openstack deployments
in parallel on the same target virthost.

The meat of the change is an attempt to clearly separate the virthost
from the undercloud; we had several tasks that worked only because the
user name (and working directory) happened to be the same in both
environments.

With these changes in place, I am able to rapidly deploy multiple
tripleo deployments on a single virthost, each one isolated to a
particular user account.  I'm using a playbook that includes
just the libvirt/setup, undercloud-deploy, and overcloud-* roles.
This is extremely convenient for some of the work that I'm doing now.
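As a rough illustration, that playbook is shaped roughly like the sketch
below (libvirt/setup and undercloud-deploy are the role names mentioned
above; the overcloud-* role names are stand-ins, so treat the exact layout
as an assumption rather than a copy of what I'm actually running):

    ---
    # illustrative only: one isolated deployment per virthost user account
    - hosts: virthost
      roles:
        - libvirt/setup

    - hosts: undercloud
      roles:
        - undercloud-deploy

    - hosts: undercloud
      roles:
        - overcloud-prep-config   # hypothetical stand-ins for the overcloud-* roles
        - overcloud-deploy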

This does require some pre-configuration on the virthost (each user
gets their own overcloud bridge) and in the quickstart (each user gets
their own undercloud_external_network_cidr).

- https://review.openstack.org/441559 modify basic test to not require 
quickstart-extras
- https://review.openstack.org/441560 use a non-default virthost_user for the 
basic test
- https://review.openstack.org/441561 restore support for multiple deployments 
on virthost
- https://review.openstack.org/441562 improve generated ssh configuration
- https://review.openstack.org/441563 derive overcloud_public_vip and 
overcloud_public_vip6
- https://review.openstack.org/441564 yaml cleanup and formatting
- https://review.openstack.org/441565 don't make ssh_config files executable
- https://review.openstack.org/441566 restrict bashate to files in repository
- https://review.openstack.org/441567 restrict pep8 to files in repository
- https://review.openstack.org/441568 fix ansible-lint error ANSIBLE0012
- https://review.openstack.org/439133 define non_root_group explicitly

-- 
Lars Kellogg-Stedman  | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][infra] does someone care about Jenkins? I stopped.

2017-03-03 Thread Jeffrey Zhang
Thanks for flagging the gate-related issues.

For #1, feel free to push a patch to improve this. Kolla is very open to
code and gate changes, so if you have an idea, please propose it.

For #2, yes, the network causes issues when building or pulling images.
OpenStack-infra already provides lots of mirrors and Kolla is configured
to use them [0]. The problem is that Kolla also depends on many other
repositories, like Elasticsearch and the RDO trunk repo. I hope
OpenStack-infra can add more mirrors, but some of those are hard to
mirror, especially RDO trunk.

On the other hand, when building Kolla images, yum/apt run a clean for
each image, and many images install the same packages; for example,
nova-compute and nova-libvirt both install qemu. So I am thinking of
setting up a caching proxy on the gate VM, which should speed up the
builds and reduce the network-related failures. I will try this.
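As a rough sketch of that idea (just an assumption about one possible
setup, not something the gate does today), the gate VM could run a caching
proxy such as squid, and the base images could point yum/apt at it via the
standard proxy settings, e.g.:

    # /etc/yum.conf inside the CentOS-based images
    proxy=http://<gate-vm-address>:3128

    # /etc/apt/apt.conf.d/01proxy inside the Debian/Ubuntu-based images
    Acquire::http::Proxy "http://<gate-vm-address>:3128";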

Finally, I hope more people will pay attention to the Kolla gate issues
and help improve them.

[0]
https://github.com/openstack/kolla/blob/master/tools/setup_gate.sh#L52,L77


On Fri, Mar 3, 2017 at 6:12 AM, Marcin Juszkiewicz <
marcin.juszkiew...@linaro.org> wrote:

> On 02.03.2017 at 20:19, Joshua Harlow wrote:
> >> 1. Kolla output is nightmare to debug.
> >>
> >> There is --logs-dir option to provide separate logs for each image build
> >> but it is not used. IMHO it should be as digging through such logs is
> >> easier.
> >>
> >
> > I too find the kolla output a bit painful, and I'm willing to help
> > improve it. What do you think would be a better approach (that we can
> > try to get to)?
>
> Once I discovered the --logs-dir option I stopped caring about the normal
> kolla output. If the Jenkins jobs could be changed to make use of it and
> to provide those logs, it would make more than just me happy.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] Stable/Ocata Version

2017-03-03 Thread Jeffrey Zhang
Any update on releasing a stable/ocata branch or tag? It is March already.

On Tue, Feb 21, 2017 at 1:23 AM, Henry Fourie wrote:

> Gary,
>
>The plan is to have a stable/ocata branch by end of month.
>
> -Louis
>
>
>
> *From:* Gary Kotton [mailto:gkot...@vmware.com]
> *Sent:* Sunday, February 19, 2017 4:29 AM
> *To:* OpenStack List
> *Subject:* [openstack-dev] [networking-sfc] Stable/Ocata Version
>
>
>
> Hi,
>
> When will this repo have a stable/ocata branch?
>
> Thanks
>
> Gary
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Containers] Creating launchpad bugs and correct tags

2017-03-03 Thread Jason Rist
On 03/02/2017 04:33 AM, Flavio Percoco wrote:
> On 02/03/17 10:57 +, Dougal Matthews wrote:
> > On 2 March 2017 at 10:40, Flavio Percoco  wrote:
> >
> >> Greetings,
> >>
> >> Just wanted to give a heads up that we're tagging all the containers
> >> related
> >> bugs with the... guess what?... containers tag. If you find an issue
> >> with
> >> one of
> >> the containers jobs or running tripleo on containers, please, file a bug
> >> and tag
> >> it accordingly.
> >>
> >
> > It might be worth adding it to the bug-tagging policy.
> > http://specs.openstack.org/openstack/tripleo-specs/specs/policy/bug-tagging.html#tags
> >
>
> Will do, thanks!
> Flavio
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I moved it from the "other" tags to "official" tags here:
https://bugs.launchpad.net/tripleo/+manage-official-tags

-- 
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
Freenode: jrist
github/twitter: knowncitizen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] networking-ofagent officially retiring

2017-03-03 Thread Armando M.
Hi neutrinos,

As stated a while back [1], it's about time to pull the trigger on the
retirement of networking-ofagent. Please find the retirement patches
available at [2]. Users of this repo must use the neutron OVS agent with
of_interface set to native to retain the same level of capability.
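For example, something along these lines in the OVS agent configuration on
each node that currently runs ofagent (file name and section shown as
typically used; please double-check against your own deployment):

    # openvswitch_agent.ini (or the ML2/OVS agent config file your deployment uses)
    [ovs]
    of_interface = native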

Cheers,
Armando

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-September/104676.html
[2] https://review.openstack.org/#/q/topic:ofagent+status:open
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] Pike PTG recap - quotas

2017-03-03 Thread Sean Dague
The irc conversations have been continuing in #openstack-dev, which has
had a solid mix of Nova and Keystone folks quite active in that.

Out of it we've started to form 2 documents in the keystone-specs
repository.

1) A high-level spec on a unified limits approach -
https://review.openstack.org/#/c/440815/ - this is my attempt to take
the existing ideas presented at the PTG and put them into a common
document. It stays out of the details of some of the interfaces, per se,
and tries to get a bit of higher-level agreement.

2) One of the things that became clear is that folks want to immediately
jump into thinking about what the enforcement strategies for hierarchical
quotas will be. It turns out that defining these rules, and the error
conditions, for hierarchies deeper than two levels is hard, especially
once the implications of computing usage come into play. Also, personally,
it's too much brain power for me to take pseudo code or even ascii art
and visualize it in my head.

This document https://review.openstack.org/#/c/441203 uses blockdiag to
lay out a few models and put names on them. *This is not complete*; the
goal is to enumerate models, scenarios within those models, what works and
what doesn't, and names that are more meaningful than "overbooking" (which,
we realized, meant different things to different people). People are
welcome to append their own models and ideas there.

At some point, we'll start talking about which model(s) OpenStack will
support. The answer might be multiple, but until we explore the space,
jumping to conclusions about what really fits here is going to be tough.

-Sean

On 03/01/2017 09:19 AM, Lance Bragstad wrote:
> FWIW - There was a lengthy discussion in #openstack-dev yesterday
> regarding this [0].
> 
> 
> [0] 
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-02-28.log.html#t2017-02-28T17:39:48
> 
> On Wed, Mar 1, 2017 at 5:42 AM, John Garbutt  > wrote:
> 
> On 27 February 2017 at 21:18, Matt Riedemann  > wrote:
> > We talked about a few things related to quotas at the PTG, some in
> > cross-project sessions earlier in the week and then some on Wednesday
> > morning in the Nova room. The full etherpad is here [1].
> >
> > Counting quotas
> > ---
> >
> > Melanie hit a problem with the counting quotas work in Ocata with
> respect to
> > how to handle quotas when the cell that an instance is running in
> is down.
> > The proposed solution is to track project/user ID information in the
> > "allocations" table in the Placement service so that we can get
> allocation
> > information for quota usage from Placement rather than the cell.
> That should
> > be a relatively simple change to move this forward and hopefully
> get the
> > counting quotas patches merged by p-1 so we have plenty of burn-in
> time for
> > the new quotas code.
> >
> > Centralizing limits in Keystone
> > ---
> >
> > This actually came up mostly during the hierarchical quotas
> discussion on
> > Tuesday which was a cross-project session. The etherpad for that
> is here
> > [2]. The idea here is that Keystone already knows about the project
> > hierarchy and can be a central location for resource limits so
> that the
> > various projects, like nova and cinder, don't have to have a
> similar data
> > model and API for limits, we can just make that common in
> Keystone. The
> > other projects would still track resource usage and calculate when
> a request
> > is over the limit, but the hope is that the calculation and
> enforcement can
> > be generalized so we don't have to implement the same thing in all
> of the
> > projects for calculating when something is over quota.
> >
> > There is quite a bit of detail in the nova etherpad [1] about
> overbooking
> > and enforcement modes, which will need to be brought up as options
> in a spec
> > and then projects can sort out what makes the most sense (there
> might be
> > multiple enforcement models available).
> >
> > We still have to figure out the data migration plan to get limits
> data from
> > each project into Keystone, and what the API in Keystone is going
> to look
> > like, including what this looks like when you have multiple compute
> > endpoints in the service catalog, or regions, for example.
> >
> > Sean Dague was going to start working on the spec for this.
> >
> > Hierarchical quota support
> > --
> >
> > The notes on hierarchical quota support are already in [1] and [2]. We
> > agreed to not try and support hierarchical quotas in Nova until we
> were
> > using 

Re: [openstack-dev] [Murano] Errors on $.instance.deploy() ... WHEN RUNNING WITHOUT MURANO-AGENT RABBIT

2017-03-03 Thread Stan Lagun
Greg,

you're right in everything you said.

This was fixed in master in https://review.openstack.org/#/c/387993/ with
the intent to fix tests and unblock the gate. But since those tests were
introduced in Ocata, the fix was not backported to Newton.
So I've just done a partial backport of the root-cause fix. Here it is:
https://review.openstack.org/#/c/441477/


Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis



On Fri, Mar 3, 2017 at 11:49 AM, Waines, Greg wrote:

> Looking for guidance on fixing the following error (end of email) that we
> are seeing when ‘deploying’ a very simple murano package/app on a Murano
> deployment WITHOUT the second rabbit server for murano-agent.
>
>
>
> So we have a NEWTON-version of MURANO integrated into our OpenStack
> solution.
>
>
>
> We have modified murano.conf:
>
>  …
>
>  # Disallow the use of murano-agent (boolean value)
>
>  disable_murano_agent = true
>
>  …
>
>
>
> We are NOT even running the second rabbit server for communication with
> murano-agent.
>
>
>
> With a very simple murano package/app, that basically just does a “
> $.instance.deploy() “ in its deploy method of its main class.
>
> *( NOTE:  I have tested this murano package/app on other Murano setups and
> it works … although those Murano setups had murano-agent and second rabbit
> enabled. )*
>
>
>
> And we get the following traceback error (see end of email).
>
>
>
> Questions:
>
> - I believe this is a valid configuration
>
> - but does anyone actually run Murano in this type of setup ?   I am
> guessing not many … so is this an upstream murano bug that I am seeing ? …
> that nobody else sees because basically no one runs in this mode ?
>
>
>
> - if this should work, and works in other setups,
>
>   … any guidance on how to resolve / debug what is going on here ?
>
>
>
>
>
> Thanks in advance,
>
> Greg.
>
>
>
>
>
>
>
> 2017-03-03 17:46:35.175 12568 ERROR murano.common.engine [-]
>
>   AttributeError: 'AgentListener' object has no attribute '_results_queue'
>
>   Traceback (most recent call last):
>
> File "/tmp/murano-packages-cache/io.murano/0.0.0/
> 3317e706ecd1417bb748361a6a3385d2/Classes/Environment.yaml", line 120:9 in
> method deploy of type io.murano.Environment
>
> $.applications.pselect($.deploy())
>
> File "/tmp/murano-packages-cache/wrs.titanium.murano.examples.
> VmFip_NoAppDeploy/0.0.0/829a861c408a4516b0589d04cce232
> 48/Classes/VmFip_NoAppDeploy.yaml", line 41:13 in method deploy of type
> wrs.titanium.murano.examples.VmFip_NoAppDeploy
>
> $.instance.deploy()
>
> File "/tmp/murano-packages-cache/io.murano/0.0.0/
> 3317e706ecd1417bb748361a6a3385d2/Classes/resources/Instance.yaml", line
> 193:9 in method deploy of type io.murano.resources.Instance
>
> $this.beginDeploy()
>
> File "/tmp/murano-packages-cache/io.murano/0.0.0/
> 3317e706ecd1417bb748361a6a3385d2/Classes/resources/Instance.yaml", line
> 131:28 in method beginDeploy of type io.murano.resources.Instance
>
> $.prepareUserData()
>
> File "/tmp/murano-packages-cache/io.murano/0.0.0/
> 3317e706ecd1417bb748361a6a3385d2/Classes/resources/LinuxMuranoInstance.yaml",
> line 14:19 in method prepareUserData of type io.murano.resources.
> LinuxMuranoInstance
>
> $.generateUserData()
>
> File "/tmp/murano-packages-cache/io.murano/0.0.0/
> 3317e706ecd1417bb748361a6a3385d2/Classes/resources/LinuxMuranoInstance.yaml",
> line 81:31 in method generateUserData of type io.murano.resources.
> LinuxMuranoInstance
>
> $region.agentListener.queueName()
>
> File "/usr/lib/python2.7/site-packages/murano/dsl/helpers.py", line
> 58 in method evaluate
>
> for d_key, d_value in six.iteritems(value))
>
> File "/usr/lib/python2.7/site-packages/yaql/language/utils.py", line
> 122 in method __init__
>
> self._d = dict(*args, **kwargs)
>
> File "/usr/lib/python2.7/site-packages/murano/dsl/helpers.py", line
> 58 in method 
>
> for d_key, d_value in six.iteritems(value))
>
> File "/usr/lib/python2.7/site-packages/murano/dsl/helpers.py", line
> 53 in method evaluate
>
> return value(context)
>
> File "/usr/lib/python2.7/site-packages/murano/dsl/yaql_expression.py",
> line 85 in method __call__
>
> return self._parsed_expression.evaluate(context=context)
>
> File "/usr/lib/python2.7/site-packages/yaql/language/expressions.py",
> line 165 in method evaluate
>
> return self(utils.NO_VALUE, context, self.engine)
>
> File "/usr/lib/python2.7/site-packages/yaql/language/expressions.py",
> line 156 in method __call__
>
> return super(Statement, self).__call__(receiver, context, engine)
>
> File "/usr/lib/python2.7/site-packages/yaql/language/expressions.py",
> line 37 in method __call__
>
> return context(self.name, engine, receiver, context)(*self.args)
>
> File 

Re: [openstack-dev] [all] [api] API-WG PTG recap

2017-03-03 Thread Chris Dent

On Fri, 3 Mar 2017, Chris Dent wrote:


* I produce a next version of the guidelines integrating the
 feedback.


I've pushed out a new version that tries to integrate some chunk of
the feedback. Probably missed some. Please comment as required:

https://review.openstack.org/#/c/421846/

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Disabling glance v1 by default in devstack

2017-03-03 Thread Andrea Frittoli
On Fri, Mar 3, 2017 at 7:38 PM Matt Riedemann  wrote:

> I've got a change proposed to disable glance v1 by default in devstack
> for Pike [1].
>
> The glance v1 API has been deprecated for awhile now. Nova started
> supporting glance v2 in Newton and removed the ability to use nova with
> glance v1 in Ocata.
>
> It also turns out that Tempest will do things with glance v1 over v2
> during test setup if glance v1 is available, so we're missing out on
> some v2 coverage.
>
+1. We should really test v2 as opposed to v1.


>
> The time has come to change the default. If you have a project or CI
> jobs that require glance v1, first, you should probably start moving to
> v2, and second, you can re-enable this by setting GLANCE_V1_ENABLED=True
> in the settings/localrc for your devstack plugin.
>
> [1] https://review.openstack.org/#/c/343129/
>
> --
>
> Thanks,
>
> Matt Riedemann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] PTG Recap

2017-03-03 Thread Kendall Nelson
Thanks Sean & Jay!

And for those of you who weren't able to make it to the PTG, or if there
was a conversation you couldn't make it to, here is the Cinder YouTube
channel with all of our discussions from the first two days [1], nova
discussion included. I added topics to the descriptions of the videos so
it's easier to see what we discussed in each video.

Special thanks to Walt for setting up the streaming :)

- Kendall Nelson (diablo_rojo)

[1] https://www.youtube.com/channel/UCJ8Koy4gsISMy0qW3CWZmaQ

On Fri, Mar 3, 2017 at 1:45 PM Sean McGinnis  wrote:

> In lieu of a lengthy email that very few will read, this is just a
> pointer.
>
> Jay Bryant (jungleboyj) did a great job during the PTG of capturing
> notes and action items from our discussions. One of the items discussed
> was around improving capturing these details and making them easier to
> find months and years down the road when we've all forgotten. Basically
> a way to avoid the usual statement of "Didn't we discuss this in $CITY".
>
> As part of that, in addition to keeping our etherpads with all the
> notes, we have started a section on the Cinder wiki to make this info
> easy to find, with links to specific PTG and Forum notes from each
> event [1].
>
> Thanks Jay for setting this up.
>
> The recap for the first event can be found here: [2]
>
> Sean (smcginnis)
>
> [1]
> https://wiki.openstack.org/wiki/Cinder#PTG_and_Summit_Meeting_Summaries
> [2] https://wiki.openstack.org/wiki/CinderPikePTGSummary
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-03 Thread Jeremy Stanley
On 2017-03-03 13:27:16 -0600 (-0600), Gregory Haynes wrote:
[...]
> I hadn't heard anything to the effect of infra not wanting us, but
> AFAIK none of us has stepped up to really ask. One issue with
> infra is that, typically, OpenStack projects do not depend
> directly on infra projects. I am sure others have a better idea of
> the pitfalls here. OTOH we have a pretty large shared set of
> knowledge between DIB and infra which makes this option fairly
> attractive.
[...]

While it may not be a perfect match, we can probably make it work if
that's a route you're interested in going. If nothing else, there's
a fair amount of overlap between the people working on DIB and on
general Infra-oriented tooling.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Errors on $.instance.deploy() ... WHEN RUNNING WITHOUT MURANO-AGENT RABBIT

2017-03-03 Thread Waines, Greg
Looking for guidance on fixing the following error (end of email) that we are 
seeing when ‘deploying’ a very simple murano package/app on a Murano deployment 
WITHOUT the second rabbit server for murano-agent.

So we have a NEWTON-version of MURANO integrated into our OpenStack solution.

We have modified murano.conf:
 …
 # Disallow the use of murano-agent (boolean value)
 disable_murano_agent = true
 …

We are NOT even running the second rabbit server for communication with 
murano-agent.

We have a very simple murano package/app that basically just does a
“$.instance.deploy()” in the deploy method of its main class.
( NOTE: I have tested this murano package/app on other Murano setups and it
works … although those Murano setups had murano-agent and the second rabbit
server enabled. )

And we get the following traceback error (see end of email).

Questions:
- I believe this is a valid configuration.
- But does anyone actually run Murano in this type of setup? I am guessing
not many … so is this an upstream murano bug that I am seeing? One that
nobody else sees because basically no one runs in this mode?

- If this should work, and it works in other setups, any guidance on how to
resolve / debug what is going on here?


Thanks in advance,
Greg.



2017-03-03 17:46:35.175 12568 ERROR murano.common.engine [-]
  AttributeError: 'AgentListener' object has no attribute '_results_queue'
  Traceback (most recent call last):
File 
"/tmp/murano-packages-cache/io.murano/0.0.0/3317e706ecd1417bb748361a6a3385d2/Classes/Environment.yaml",
 line 120:9 in method deploy of type io.murano.Environment
$.applications.pselect($.deploy())
File 
"/tmp/murano-packages-cache/wrs.titanium.murano.examples.VmFip_NoAppDeploy/0.0.0/829a861c408a4516b0589d04cce23248/Classes/VmFip_NoAppDeploy.yaml",
 line 41:13 in method deploy of type 
wrs.titanium.murano.examples.VmFip_NoAppDeploy
$.instance.deploy()
File 
"/tmp/murano-packages-cache/io.murano/0.0.0/3317e706ecd1417bb748361a6a3385d2/Classes/resources/Instance.yaml",
 line 193:9 in method deploy of type io.murano.resources.Instance
$this.beginDeploy()
File 
"/tmp/murano-packages-cache/io.murano/0.0.0/3317e706ecd1417bb748361a6a3385d2/Classes/resources/Instance.yaml",
 line 131:28 in method beginDeploy of type io.murano.resources.Instance
$.prepareUserData()
File 
"/tmp/murano-packages-cache/io.murano/0.0.0/3317e706ecd1417bb748361a6a3385d2/Classes/resources/LinuxMuranoInstance.yaml",
 line 14:19 in method prepareUserData of type 
io.murano.resources.LinuxMuranoInstance
$.generateUserData()
File 
"/tmp/murano-packages-cache/io.murano/0.0.0/3317e706ecd1417bb748361a6a3385d2/Classes/resources/LinuxMuranoInstance.yaml",
 line 81:31 in method generateUserData of type 
io.murano.resources.LinuxMuranoInstance
$region.agentListener.queueName()
File "/usr/lib/python2.7/site-packages/murano/dsl/helpers.py", line 58 in 
method evaluate
for d_key, d_value in six.iteritems(value))
File "/usr/lib/python2.7/site-packages/yaql/language/utils.py", line 122 in 
method __init__
self._d = dict(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/murano/dsl/helpers.py", line 58 in 
method 
for d_key, d_value in six.iteritems(value))
File "/usr/lib/python2.7/site-packages/murano/dsl/helpers.py", line 53 in 
method evaluate
return value(context)
File "/usr/lib/python2.7/site-packages/murano/dsl/yaql_expression.py", line 
85 in method __call__
return self._parsed_expression.evaluate(context=context)
File "/usr/lib/python2.7/site-packages/yaql/language/expressions.py", line 
165 in method evaluate
return self(utils.NO_VALUE, context, self.engine)
File "/usr/lib/python2.7/site-packages/yaql/language/expressions.py", line 
156 in method __call__
return super(Statement, self).__call__(receiver, context, engine)
File "/usr/lib/python2.7/site-packages/yaql/language/expressions.py", line 
37 in method __call__
return context(self.name, engine, receiver, context)(*self.args)
File "/usr/lib/python2.7/site-packages/yaql/language/contexts.py", line 65 
in method 
data_context, use_convention, function_filter)
File "/usr/lib/python2.7/site-packages/yaql/language/runner.py", line 49 in 
method call
name, all_overloads, engine, receiver, data_context, args, kwargs)
File "/usr/lib/python2.7/site-packages/yaql/language/runner.py", line 117 
in method choose_overload
args = tuple(arg_evaluator(i, arg) for i, arg in enumerate(args))
File "/usr/lib/python2.7/site-packages/yaql/language/runner.py", line 117 
in method 
args = tuple(arg_evaluator(i, arg) for i, arg in enumerate(args))
File "/usr/lib/python2.7/site-packages/yaql/language/runner.py", line 113 
in method 
and not isinstance(arg, expressions.Constant))
File 

Re: [openstack-dev] [placement][nova] PTG summary

2017-03-03 Thread Matt Riedemann

On 3/3/2017 10:49 AM, Jay Pipes wrote:


Implementation news
---

Discussions at the PTG identified that in order to actually implement
priority #1, however, we would need to complete #2 first :)

And so, we are currently attempting to get the os-traits library in
shape [4], getting the nova-spec approved for the placement traits API
[5] and getting the traits implementation out of WIP mode [6].

Once the traits work is complete, the shared storage providers work can
be resumed [7].

Once that work is complete, we will move on to the aforementioned nested
resource providers work as well as integration with the nova-scheduler
for traits and shared providers.



The os-traits patches are all merged and the 0.2.0 release request is here:

https://review.openstack.org/#/c/441437/

--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] neutron-lib impact: ServicePluginBase and PluginInterface moved

2017-03-03 Thread Boden Russell
Heads up: the ServicePluginBase and PluginInterface classes are being
removed from neutron.

- ServicePluginBase is available in neutron-lib.
- PluginInterface is likely going to remain private in neutron-lib;
pretty much everyone is using ServicePluginBase anyway.

A patch is proposed to neutron [1] consuming the above changes, and
patches have been proposed to stadium projects for the same [2]. For all
other projects using these classes, please sync up and consume from
neutron-lib.
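For projects doing that sync-up, the change is essentially an import swap.
A minimal sketch is below; both module paths are from memory and should be
confirmed against the patches in [1] and [2] before copying:

    # before: importing the class from neutron (being removed)
    from neutron.services.service_base import ServicePluginBase

    # after: consuming it from neutron-lib (path assumed; verify against [1])
    from neutron_lib.services.base import ServicePluginBase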

We'll hold off on landing [1] until consumers get a chance to sync up.

For a deeper dive into (and comments on) why we're likely making
PluginInterface private, please see [3].

Thanks


[1] https://review.openstack.org/#/c/441129/
[2] https://review.openstack.org/#/q/message:+consume+ServicePluginBase
[3] https://review.openstack.org/#/c/424151/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] PTG Recap

2017-03-03 Thread Sean McGinnis
In lieu of a lengthy email that very few will read, this is just a
pointer.

Jay Bryant (jungleboyj) did a great job during the PTG of capturing
notes and action items from our discussions. One of the items discussed
was around improving capturing these details and making them easier to
find months and years down the road when we've all forgotten. Basically
a way to avoid the usual statement of "Didn't we discuss this in $CITY".

As part of that, in addition to keeping our etherpads with all the
notes, we have started a section on the Cinder wiki to make this info
easy to find, with links to specific PTG and Forum notes from each
event [1].

Thanks Jay for setting this up.

The recap for the first event can be found here: [2]

Sean (smcginnis)

[1] https://wiki.openstack.org/wiki/Cinder#PTG_and_Summit_Meeting_Summaries
[2] https://wiki.openstack.org/wiki/CinderPikePTGSummary

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Disabling glance v1 by default in devstack

2017-03-03 Thread Matt Riedemann
I've got a change proposed to disable glance v1 by default in devstack 
for Pike [1].


The glance v1 API has been deprecated for a while now. Nova started 
supporting glance v2 in Newton and removed the ability to use nova with 
glance v1 in Ocata.


It also turns out that Tempest will do things with glance v1 over v2 
during test setup if glance v1 is available, so we're missing out on 
some v2 coverage.


The time has come to change the default. If you have a project or CI 
jobs that require glance v1, first, you should probably start moving to 
v2, and second, you can re-enable this by setting GLANCE_V1_ENABLED=True 
in the settings/localrc for your devstack plugin.
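For example, a minimal sketch of that setting in your plugin's settings
file or localrc:

    # re-enable the deprecated glance v1 API in devstack
    GLANCE_V1_ENABLED=True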


[1] https://review.openstack.org/#/c/343129/

--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-03 Thread Gregory Haynes
Hello,

Thanks for bringing this back to life.

As I am sure some are aware I have been mostly absent from DIB lately,
so don't let me stop you all from going forward with this or any of the
other plans. I just wanted to do a bit of a braindump on my thought
process from a while back on why I never went through with trying to
become an independent openstack project team.

The main issue that prevented me from going forward with this was that I
worried we were too small for it to work effectively.  IME DIB tends to
have a fair amount of drive by contributors and a very small (roughly
2-3)  set of main contributors who are very part-time and who certainly
aren't primarily focused on DIB (or even upstream OpenStack).
Fortunately, I think the project does fine with this setup: The number
of new features scales up or down to meet our contributor capacity and
we haven't required a ton of firefighting in recent memory. Not only
that, we actually seem to be extremely stable in this setup which is
great given how we can break many other projects in ways which are non
trivial to debug.

Our low contributor capacity does pose some problems when you try to
become an OpenStack project team though. Firstly, someone needs to agree
to be PTL and, essentially, take the responsibilities seriously [1]. In
addition to the issue of having someone willing to do this, I worried
that the responsibilities would take up a non trivial amount of time
(for our low activity project) which previously went to other tasks
keeping the project afloat. I also was not sure we would be doing anyone
any favors if a cycle or two down the road we ended up in a spot where
no one is interested in running for PTL even though the project itself
is doing fine. Maybe some of the TC folks can correct me if I'm wrong
but that seems to create a fair bit of churn where a decision has to be
made on whether to attic the project or do something else like appoint a
PTL.

All that to say - If we decide to go the route of becoming on
independent openstack project would we have someone willing to be PTL
and do we think that would be an effective use of our time?



WRT us being consumed by glance or infra - I think either of these could
work. I hadn't heard anything to the effect of infra not wanting us, but
AFAIK none of us has stepped up to really ask. One issue with infra is
that, typically, OpenStack projects do not depend directly on infra
projects. I am sure others have a better idea of the pitfalls here. OTOH
we have a pretty large shared set of knowledge between DIB and infra
which makes this option fairly attractive.

My primary concern with glance is that AFAIK the only relation we've had
historically is the word 'image' in our project description. That is to
say, I don't know of any shared knowledge between the contributor base.
As a result I am not really a fan of this option.

For both of these it's not really an issue of whether we'd like to 'own'
the project IMO (it's all the same open source project after all, we
don't own it). It's mostly a matter of whether it's technically feasible
(e.g. are there issues with infra due to things like dependencies) and
whether it makes any sense from a collaboration standpoint (otherwise
we'll end up right back where we are, but with another parent project
team).



I'd like to propose a third option which I think may be best - we could
become an independent non-OpenStack project hosted by OpenStack infra.
This would allow us to effectively continue operating as we do today,
which is IMO ideal. Furthermore, this would resolve some of the issues
we've had relating to the release process, where we wanted to be
release:independent and tag our own releases (we would simply no longer be
the release team's concern, rather than needing to be special-cased). I
feel like we've already been effectively operating in this manner (as an
independent non-OpenStack project), so it seems a natural fit to me.
Hopefully some of the more OpenStack-process-enlightened folks can chime
in to confirm that this is doable and OK, or point out big issues I am
missing here...


HTH,
Greg

--

1: https://docs.openstack.org/project-team-guide/ptl.html


On Thu, Mar 2, 2017, at 03:31 PM, Emilien Macchi wrote:
> On Thu, Jan 12, 2017 at 3:06 PM, Yolanda Robla Mota 
> wrote:
> > From my point of view, i've been using that either on infra with
> > puppet-infracloud, glean.. and now with TripleO. So in my opinion, it shall
> > be an independent project, with core contributors from both sides.
> >
> > On Thu, Jan 12, 2017 at 8:51 PM, Paul Belanger 
> > wrote:
> >>
> >> On Thu, Jan 12, 2017 at 02:11:42PM -0500, James Slagle wrote:
> >> > On Thu, Jan 12, 2017 at 1:55 PM, Emilien Macchi 
> >> > wrote:
> >> > > On Thu, Jan 12, 2017 at 12:06 PM, Paul Belanger
> >> > >  wrote:
> >> > >> Greetings,
> >> > >>
> >> > >> With the containerization[1] of tripleo, I'd like to know more about
> >> > >> 

Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-03 Thread Ben Nemec



On 03/03/2017 03:25 AM, Ligong LG1 Duan wrote:

I am wondering whether DIB can become a component of Glance, as DIB is used to 
create OS images and Glance to upload OS images.


I see a big difference between creating images and storing them.  I 
can't imagine Glance would have any interest in owning dib, nor do I 
think they should.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-03 Thread Ben Nemec



On 03/02/2017 03:31 PM, Emilien Macchi wrote:

On Thu, Jan 12, 2017 at 3:06 PM, Yolanda Robla Mota  wrote:

From my point of view, i've been using that either on infra with
puppet-infracloud, glean.. and now with TripleO. So in my opinion, it shall
be an independent project, with core contributors from both sides.

On Thu, Jan 12, 2017 at 8:51 PM, Paul Belanger 
wrote:


On Thu, Jan 12, 2017 at 02:11:42PM -0500, James Slagle wrote:

On Thu, Jan 12, 2017 at 1:55 PM, Emilien Macchi 
wrote:

On Thu, Jan 12, 2017 at 12:06 PM, Paul Belanger
 wrote:

Greetings,

With the containerization[1] of tripleo, I'd like to know more about
the future of
diskimage-builder as it relates to the tripleo project.

Reading the recently approved spec for containers, container (image)
builds are
no longer under the control of tripleo; by kolla. Where does this
leave
diskimage-builder as a project under tripleo?  I specifically ask,
because I'm
wanting to start down the path of using diskimage-builder as an
interface to
containers.

Basically, is it time to move diskimage-builder out from the tripleo
project
into another, or its own? Or is tripleo wanting to more forward on
development
efforts on diskimage-builder.


Looking at stats on who is actively contributing into DIB:
http://stackalytics.com/?module=diskimage-builder

It seems that we have some folks from infra and some folks on dib
only, and a few contributors from TripleO.

I guess the best option is to ask DIB contributors: do you want to own
the project you're committing to?
If not, is it something that should stay in TripleO (imo no) or move
into openstack-infra (imo yes, if infra agrees).

With my PTL hat, I'm really open to this thing and I wouldn't mind to
transfer ownership to another group.


I was under the impression it was already it's own project team based
on:
http://lists.openstack.org/pipermail/openstack-dev/2016-July/099805.html

It looks like the change was never made in governance however.


Yes, it just looks like Greg created new core reviewers, not officially
breaking
away from tripleo.

If everybody is on board with moving diskimage-builder outside of tripleo,
we need to decide where it lives. Two options come to mind:

1) Move diskimage-builder into own (big tent?) project. Setup a new PTL,
etc.


Let's move forward with this one if everybody agrees on that.

DIB folks: please confirm on this thread that you're ok to move out
DIB from TripleO and be an independent project.
Also please decide if we want it part of the Big Tent or not (it will
require a PTL).


I was +1 on splitting the review team out and I'm +1 on making it a 
completely separate project.  It's already functioning as one anyway, 
with its own meeting and IRC channel.





2) Move diskimage-builder into openstack-infra (fungi PTL).


I don't think Infra wants to carry this one.


3) Keep diskimage-builder under tripleo (EmilienM PTL).


We don't want to carry this one anymore for the reasons mentioned in
that thread.


Thoughts?


The reason I -1'd Paul's TripleO spec and suggested it be proposed to
diskimage-builder was due to:
http://lists.openstack.org/pipermail/openstack-dev/2016-June/098560.html
and
https://review.openstack.org/#/c/336109/

I just want to make sure the right set of reviewers who are driving
dib development see the spec proposal.

--
-- James Slagle
--


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Yolanda Robla Mota
NFV Partner Engineer
yrobl...@redhat.com
+34 605641639

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-03 Thread Jeremy Stanley
On 2017-03-02 09:56:28 -0800 (-0800), Ihar Hrachyshka wrote:
> On Thu, Mar 2, 2017 at 8:13 AM, Pavlo Shchelokovskyy
>  wrote:
> > I'm also kind of wondering what the grenade job in stable/newton will test
> > after mitaka EOL? upgrade from mitaka-eol tag to stable/newton branch? Then
> > even that might be affected if devstack-gate + project config will not be
> > able to set *_ssh in enabled drivers while grenade will try to use them.
> 
> When a branch is EOLed, grenade jobs using it for old side of the
> cloud are deprovisioned.

By way of explanation, the reason for this is that once a branch is
no longer supported and is closed for new changes, we can no longer
update it to keep it runnable, so we cannot prevent it from breaking
the "old" side of those upgrade jobs. From a downstream perspective,
it means that we recommend upgrading _before_ the version you're
running reaches EOL rather than after (because we no longer continue
testing the upgrade process from that point on).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] PTG summary

2017-03-03 Thread Ben Swartzlander
Here is a brief summary of the Manila PTG. For more details, check out 
the PTG etherpad [1].


Retrospective
-
The specs process was a success, but tweaks are needed, especially 
around "high priority" specs.


Ensure Share

This driver interface is currently being used incorrectly and needs a 
rewrite. We reached agreement on the changes that are needed, but no one 
volunteered to write a spec and do the work. There are interactions with 
IPv6, so something needs to be done in Pike.


v1 API
--
Everyone is in favor of removing the v1 API. I've sent a notice to the 
operators list about the removal plans and we will proceed with deletion 
during pike.


IPv6

There are some unsolved problems related to testing but we are committed 
to solving these and adding IPv6 support during Pike. One approach to 
solving testing problems is to look at swapping out Ubuntu and using 
CentOS instead.


Incompatible Features
-
New features were once again added in Ocata without much regard for what 
existing features might not work well with them. While no specific 
incompatibilities are known, we need to comprehensively test the 
combinations and deal with any problems we find.


Experimental Features
-
This was a highly contentious issue with strong opinions on both sides. 
There remain good arguments for and against the continued use of 
"experimental" features. It's likely that we will evolve the exact 
mechanism for experimental APIs in the future because the existing 
mechanism has serious flaws. For feature development in general though 
there is not much support for feature branches or other styles of 
iterating on large features. We agreed to follow up on this topic in the 
weekly meeting to decide how to handle future experimental features.


Driver Tags
---
As discussed in Barcelona we plan to move ahead with the driver tags 
concept. I will be adding a maintainers file for the drivers with 
official information about driver maintenance status including 
maintainer contact info and CI status. We hope to use this mechanism to 
apply pressure to vendors to keep CI in good running shape, and to 
expand CI coverage as the community adds better tests (like scenario tests).


User Messages
-
jprovazn has volunteered to pick up this proposal and carry it forward. 
The main thing left to be done is to find a few more use cases for this 
feature so we can feel comfortable that it's not a generic solution to a 
specific problem.


Race Conditions
---
For Pike the plan is to keep increasing our usage of tooz locks and 
other tooz features. Also the snapshotting state will be added for shares.


Share Groups

There were many proposals to build new features on top of share groups 
but we agreed not to pursue any of those in Pike. Our priority is to 
make the feature work as well as the CG feature used to work and get the 
API fully supported.


Non-disruptive Share Manage
---
Ever since we added the "manage share" feature, there have been concerns 
that we don't address the use case of importing an in-use share in a 
non-disruptive way. Some drivers have implemented vendor-specific hacks 
to achieve this, but it leads to inconsistent behavior. We want to 
introduce tests that enforce common behavior across backends and also 
implement a common way to do non-disruptive "manage share" operations. 
The consensus is that we will pursue a 2-phase manage.


Share Replication
-
The main requirement to move this feature to non-experimental is correct 
implementation of replica quotas. Also we need to have a good plan for 
how to handle DHSS=true which involves changes to the share-networks 
interface.


Share Networks and Subnets
--
Share networks currently can't support either IPv6 or replication where 
the replicas are in different broadcast domains. A significant rework of 
these APIs is needed, and we need to provide backwards compatibility.


Tempest Integration/Decoupling
--
Ben wants to keep the tests and code in the same repo. We believe there 
is an approach that allows us to meet the needs of the tempest team and 
the rest of the OpenStack community without moving our test code. The 
highest priority remains moving the stuff our tests depend on out of 
tempest into tempest-lib so we can un-pin our tempest commit.


Enabled Share Protocols in the UI
-
We need to make the set of enabled protocols discoverable so GUIs can 
prevent users choosing options that won't work. We agreed to address 
this with new public extra specs on share types.


Telemetry
-
vkmc volunteered to get our telemetry integration working.

Missing Public Extra Specs
--
The concept of public extra specs was added after some features which 
should have had public extra specs were added. In 

[openstack-dev] [Sahara][infra] Jenkins based 3rd party CIs

2017-03-03 Thread Telles Nobrega
Hello,

We at Sahara use the Zuul v2.5 compatibility layer and are wondering
whether, with the change to Zuul v3, this compatibility layer will still
be maintained. If the layer is removed it will require some changes on
our side, and we are looking for this information to gauge how much work
will be needed on our CI.

Thanks,
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [api] API-WG PTG recap

2017-03-03 Thread Chris Dent


The following attempts to be a summary of some of the API-WG related
activity from last week's PTG. Despite some initial lack of
organization, we managed to have several lively discussions in a room
that was occasionally standing room only.

I had intended to do a daily summary of my time at the PTG but
completely failed to do so. As with most OpenStack gatherings, the
time was very compressed and completely exhausting. Fellow API-WG
core Ed Leafe managed a blog posting which includes some summary of
the API-WG period:
https://blog.leafe.com/atlanta-ptg-reflections/

We had initially planned to share a room with the architecture
working group, but at the last minute a room of our own (Georgia 5)
was made available to us. This led to some confusion on where people
were supposed to be when, but through the judicious use of IRC,
signs in the hallway and going and finding people who might help
with discussion, we managed to keep things moving. There also seemed
to be a degree of "I don't know where else to be so I'll hang with
the API-WG". This turned out to be great as it meant we had a lot of
diverse participation. To me that's the whole point of having these
gatherings, so: great success.

Because of sharing rooms with the arch-wg we also shared an initial
etherpad with them:

https://etherpad.openstack.org/p/ptg-architecture-workgroup

On there we formed an agenda and then used topic based etherpads for
most discussions:

* stability and compatibility guidelines:
  https://etherpad.openstack.org/p/api-stability-guidelines
* capabilities discovery:
  https://etherpad.openstack.org/p/capabilities-pike
* service catalog and service types:
  https://etherpad.openstack.org/p/service-catalog-pike

with some discussion for how/when to raise the minimum microversion
happening on the architecture etherpad.

Sections for each of these below.

# Stability/Compatibility Guidelines

https://etherpad.openstack.org/p/api-stability-guidelines

This topic was discussion related to the updates being made to the
guidelines for stability and compatibility in APIs happening at
https://review.openstack.org/#/c/421846/

There are plans for this to become the guidance for a voluntary tag
that asserts a service's API is stable. The passionate discussion
throughout the morning and into the afternoon was in large part
reaching some agreement about the similarities and differences in
meaning of the terms "stability", "compatibility" and
"interoperability" and how those meanings might change depending on
whether the person using the term was a developer, deployer or user
of OpenStack services.

In the end the main outcomes were:

* The definitions that matter to the terms above are the ones that
  impact the end user and that if we really want stability and
  interoperability for that class of people, change of any sort that
  is not clearly signalled is bad.

* Though microversions are contentious they are the tool we have at
  this time which does the job we want. However, care must be taken
  to not allow the presence of microversion to license constant
  change.

* It's accepted and acknowledged that when a service chooses to be
  stable it is accepting an increased level of development pain (and
  potential duplication and cruft in code) to minimize pain caused
  to end users.

* A service should not opt-in to stability until it is actually
  stable. A service which is already stable, but wants to experiment
  with functionality that it may not keep should put that
  functionality on a different endpoint (as in different service).

* People who voiced strong opinions at the meeting should comment on
  the review. Not much of this has happened yet.

* Strictness in stability is more important the more "public" the
  interface is. A deployer only interface is less public.

* It is considered normal practice to express a potentially
  different version with each different URL requested from a
  service. What should be true is that if you take that exact same
  code and use it against a service that supports the same versions
  it should "just work" (modulo policy).

* Supporting continuous deployment is part of the OpenStack way.
  This increases some of the developer-side pain mentioned above.

* We should document client side best practices that help ensure
  stability on that side too. For example, evaluating success as
  200 <= resp.status < 300 instead of pinning to one exact code such
  as 200, 201 or 202 (see the sketch after this list).

* The guideline should document more of the reasoning.
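To make the success-check example above concrete, here is a minimal
client-side sketch (using python-requests; the URL and handle() callback
are placeholders, purely illustrative):

    import requests

    resp = requests.get("https://example.cloud/v2.1/servers")  # placeholder URL
    if 200 <= resp.status_code < 300:
        # accept any 2xx as success rather than pinning to one exact code
        handle(resp.json())  # handle() is a hypothetical application callback
    else:
        resp.raise_for_status()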

So: we landed somewhere pretty strict, but that strictness is
optional. A project that wants the tag should follow the guidelines
and a project that eventually wants the tag or wants to at least
strive for interoperability should be aware of the guidelines and
implement those it can.

Next Steps:

* People comment on the review
* I produce a next version of the guidelines integrating the
  feedback.

# Capabilities Discovery

https://etherpad.openstack.org/p/capabilities-pike


[openstack-dev] [placement][nova] PTG summary

2017-03-03 Thread Jay Pipes

Hi Stackers,

In Atlanta, there were a lot of discussions involving the new(ish) 
placement service. I'd like to summarize the topics of discussion and 
highlight what the team aims to get done in the Pike release.


A quick refresher
-

The placement service's mission is to provide a stable, generic 
interface for accounting of resources that are consumed in an OpenStack 
deployment. Though the placement service currently resides in the Nova 
codebase [1], our goal is to eventually lift the service out into its 
own repository. We do not yet have a date for this forklift, but the 
placement code has been written from the start to be decoupled from Nova.


Progress to date


To date, we've made good progress on the quantitative side of the 
placement API:


* nova-compute workers are reporting inventory records for resources 
they know about like vCPU, RAM and disk

* Admins can create custom resource classes through the placement REST API
* Providers of resources can be associated with each other via aggregate 
associations
* and nova-scheduler is now calling the placement REST API to filter the 
list of compute nodes that it inspects during scheduling decisions


We have a patch currently going through the final stages of review that 
integrates the Ironic virt driver with the placement API's custom 
resource classes [2]. This patch marks an important milestone for both 
Nova and Ironic with regards to how Ironic baremetal resources are 
accounted for in the system.


Priorities for Pike
---

At the PTG, we decided that the following are our highest priority focus 
areas (in order):


1) Completion of the shared resource provider modeling and implementation

Shared storage accounting is the primary use case here, along with 
Neutron routed networks.


2) Getting the qualitative side of the placement API done

As mentioned above, most work to-date has focused on the quantitative 
side of the request spec. The other side of the request spec is the 
qualitative one, which we're calling "traits". Providers of resources 
(compute nodes, Ironic baremetal nodes, SR-IOV NICs, FPGAs, routed 
network pools, etc) can be decorated with these string traits to 
indicate features/capabilities of the provider.


For example, a compute node might be decorated with the trait 
HW_CPU_X86_AVX2 or an SR-IOV NIC might be decorated with a trait 
indicating the physical network associated with the NIC.


The placement API will provide REST endpoints for managing these traits 
and their association with resource providers.
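Purely as a hypothetical sketch of what managing traits on a provider
might look like (the endpoint and payload here are my assumptions; the
nova-spec referenced below as [5] is the authoritative source):

    # hypothetical request: replace the traits on a resource provider
    PUT /resource_providers/{uuid}/traits
    {
        "resource_provider_generation": 1,
        "traits": ["HW_CPU_X86_AVX2", "CUSTOM_PHYSNET_PUBLIC"]
    }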


3) Merging support for nested resource providers concepts

Canonical examples of nested resource providers include SR-IOV PFs and 
NUMA nodes and sockets.


Much work for this has already been proposed in previous cycles [3]. We 
need to push forward with this and get it done.


Implementation news
-------------------

Discussions at the PTG identified that in order to actually implement 
priority #1, however, we would need to complete #2 first :)


And so, we are currently attempting to get the os-traits library in 
shape [4], getting the nova-spec approved for the placement traits API 
[5] and getting the traits implementation out of WIP mode [6].


Once the traits work is complete, the shared storage providers work can 
be resumed [7].


Once that work is complete, we will move on to the aforementioned nested 
resource providers work as well as integration with the nova-scheduler 
for traits and shared providers.


Cinder
------

We had a nice discussion with folks in the Cinder team about what the 
placement service is all about and how Cinder can use it in the future. 
We've asked the Cinder team to help us identify block-storage-specific 
qualitative traits that can be standardized in the os-traits library. 
We're looking forward to helping the Cinder community do storage-aware 
scheduling affinity using the placement API in Queens and beyond.


Thanks all for reading!

Best,
-jay

[1] 
https://github.com/openstack/nova/tree/master/nova/api/openstack/placement

[2] https://review.openstack.org/#/c/437602/
[3] 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/nested-resource-providers
[4] 
https://review.openstack.org/#/q/project:openstack/os-traits+branch:master+topic:normalize

[5] https://review.openstack.org/#/c/345138/
[6] https://review.openstack.org/#/q/topic:bp/resource-provider-tags
[7] 
https://review.openstack.org/#/q/status:abandoned+project:openstack/nova+branch:master+topic:bp/shared-resources-pike


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][release] Pike release management communications

2017-03-03 Thread Matt Riedemann

On 3/3/2017 9:56 AM, Thierry Carrez wrote:

Hello, fellow PTLs,

As Doug did for the past few cycles, I want to start the Pike cycle by
making sure the expectations for communications with the release team
are clear to everyone so there is no confusion or miscommunication about
any of the process or deadlines. This email is being sent to the
openstack-dev mailing list as well as the PTLs of all official OpenStack
projects individually, to improve the odds that all of the PTLs see it.
Note that future release-related emails will *not* be CCed to all
individual PTLs.

(If you were a PTL/release liaison last cycle, feel free to skip ahead
to the "things for you to do right now" section at the end.)

The release liaison for your project is responsible for coordinating
with the release management team, validating your team release requests,
and ensuring that the release cycle deadlines are met. If you don't
nominate a release liaison for your project (something I encourage you
to do), this task falls back to the PTL. Note that release liaisons do
not have to be core reviewers.

Please ensure that your liaison has the time and ability to handle the
communication necessary to manage your release: the release team is here
to facilitate, but finishing the release work is ultimately the
responsibility of the project team. Failing to follow through on a
needed process step may block you from successfully meeting deadlines or
releasing. In particular, our release milestones and deadlines are
date-based, not feature-based. When the date passes, so does the
milestone. If you miss it, you miss it. A few of you ran into problems
in past cycles because of missed communications. My goal is to have all
teams meet all deadlines during Pike. We came very very close for Ocata;
please help by keeping up to date on deadlines.

To ensure smooth coordination, we rely on three primary communication tools:

1. Email, for announcements and asynchronous communication.

The release management team will be using the "[release]" topic tag on
the openstack-dev mailing list for important messages. This includes
weekly countdown emails with details on focus, tasks, and upcoming
dates. As PTL or release liaison, you should ensure that you see and
read those messages (by configuring your mailing list subscription and
email client as needed) so that you are aware of all deadlines, process
changes, etc.

2. IRC, for time-sensitive interactions.

With more than 50 teams involved, the three members of the release team
can't track each of you down when there is a deadline. We rely on
your daily presence in the #openstack-release IRC channel during
deadline weeks. You are, of course, welcome to stay in channel all the
time, but we need you to be there at least during deadline weeks.

3. Written documentation, for relatively stable information.

The release team has published the schedule for the Pike cycle to
http://releases.openstack.org/pike/schedule.html. Although I will
highlight dates in the countdown emails, you may want to add important
dates from the schedule to your calendar. One way to do that is to
subscribe to the ICS feed for community-wide deadlines:
https://releases.openstack.org/schedule.ics

Note that the Pike cycle overlaps with summer holidays in the northern
hemisphere. If you are planning time off, please make sure your duties
are being covered by someone else on the team. It's best to let the
release team know in advance so we don't delay approval for release
requests from someone we don't recognize, waiting for your +1.

Things for you to do right now:

1. Update your release liaison on
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management

2. Make sure your IRC nickname and email address listed in
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
are correct. The release team, foundation staff, and TC all use those
contact details to try to reach you at important points during the
cycle. Please make sure they are correct, and that the email address
delivers messages to a mailbox you check regularly.

3. Update your mail filters to ensure you see messages sent to the
openstack-dev list with [release] in the subject line.

4. Reply to this message, off-list, to me so I know that you have
received it. A simple “ack” is enough :)



ack

--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][release] Pike release management communications

2017-03-03 Thread Matt Riedemann

On 3/3/2017 10:16 AM, Matt Riedemann wrote:

On 3/3/2017 9:56 AM, Thierry Carrez wrote:

Hello, fellow PTLs,

As Doug did for the past few cycles, I want to start the Pike cycle by
making sure the expectations for communications with the release team
are clear to everyone so there is no confusion or miscommunication about
any of the process or deadlines. This email is being sent to the
openstack-dev mailing list as well as the PTLs of all official OpenStack
projects individually, to improve the odds that all of the PTLs see it.
Note that future release-related emails will *not* be CCed to all
individual PTLs.

(If you were a PTL/release liaison last cycle, feel free to skip ahead
to the "things for you to do right now" section at the end.)

The release liaison for your project is responsible for coordinating
with the release management team, validating your team release requests,
and ensuring that the release cycle deadlines are met. If you don't
nominate a release liaison for your project (something I encourage you
to do), this task falls back to the PTL. Note that release liaisons do
not have to be core reviewers.

Please ensure that your liaison has the time and ability to handle the
communication necessary to manage your release: the release team is here
to facilitate, but finishing the release work is ultimately the
responsibility of the project team. Failing to follow through on a
needed process step may block you from successfully meeting deadlines or
releasing. In particular, our release milestones and deadlines are
date-based, not feature-based. When the date passes, so does the
milestone. If you miss it, you miss it. A few of you ran into problems
in past cycles because of missed communications. My goal is to have all
teams meet all deadlines during Pike. We came very very close for Ocata;
please help by keeping up to date on deadlines.

To ensure smooth coordination, we rely on three primary communication
tools:

1. Email, for announcements and asynchronous communication.

The release management team will be using the "[release]" topic tag on
the openstack-dev mailing list for important messages. This includes
weekly countdown emails with details on focus, tasks, and upcoming
dates. As PTL or release liaison, you should ensure that you see and
read those messages (by configuring your mailing list subscription and
email client as needed) so that you are aware of all deadlines, process
changes, etc.

2. IRC, for time-sensitive interactions.

With more than 50 teams involved, the three members of the release team
can't track each of you down when there is a deadline. We rely on
your daily presence in the #openstack-release IRC channel during
deadline weeks. You are, of course, welcome to stay in channel all the
time, but we need you to be there at least during deadline weeks.

3. Written documentation, for relatively stable information.

The release team has published the schedule for the Pike cycle to
http://releases.openstack.org/pike/schedule.html. Although I will
highlight dates in the countdown emails, you may want to add important
dates from the schedule to your calendar. One way to do that is to
subscribe to the ICS feed for community-wide deadlines:
https://releases.openstack.org/schedule.ics

Note that the Pike cycle overlaps with summer holidays in the northern
hemisphere. If you are planning time off, please make sure your duties
are being covered by someone else on the team. It's best to let the
release team know in advance so we don't delay approval for release
requests from someone we don't recognize, waiting for your +1.

Things for you to do right now:

1. Update your release liaison on
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management

2. Make sure your IRC nickname and email address listed in
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

are correct. The release team, foundation staff, and TC all use those
contact details to try to reach you at important points during the
cycle. Please make sure they are correct, and that the email address
delivers messages to a mailbox you check regularly.

3. Update your mail filters to ensure you see messages sent to the
openstack-dev list with [release] in the subject line.

4. Reply to this message, off-list, to me so I know that you have
received it. A simple “ack” is enough :)



ack



Damn!

--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptl][release] Pike release management communications

2017-03-03 Thread Thierry Carrez
Hello, fellow PTLs,

As Doug did for the past few cycles, I want to start the Pike cycle by
making sure the expectations for communications with the release team
are clear to everyone so there is no confusion or miscommunication about
any of the process or deadlines. This email is being sent to the
openstack-dev mailing list as well as the PTLs of all official OpenStack
projects individually, to improve the odds that all of the PTLs see it.
Note that future release-related emails will *not* be CCed to all
individual PTLs.

(If you were a PTL/release liaison last cycle, feel free to skip ahead
to the "things for you to do right now" section at the end.)

The release liaison for your project is responsible for coordinating
with the release management team, validating your team release requests,
and ensuring that the release cycle deadlines are met. If you don't
nominate a release liaison for your project (something I encourage you
to do), this task falls back to the PTL. Note that release liaisons do
not have to be core reviewers.

Please ensure that your liaison has the time and ability to handle the
communication necessary to manage your release: the release team is here
to facilitate, but finishing the release work is ultimately the
responsibility of the project team. Failing to follow through on a
needed process step may block you from successfully meeting deadlines or
releasing. In particular, our release milestones and deadlines are
date-based, not feature-based. When the date passes, so does the
milestone. If you miss it, you miss it. A few of you ran into problems
in past cycles because of missed communications. My goal is to have all
teams meet all deadlines during Pike. We came very very close for Ocata;
please help by keeping up to date on deadlines.

To ensure smooth coordination, we rely on three primary communication tools:

1. Email, for announcements and asynchronous communication.

The release management team will be using the "[release]" topic tag on
the openstack-dev mailing list for important messages. This includes
weekly countdown emails with details on focus, tasks, and upcoming
dates. As PTL or release liaison, you should ensure that you see and
read those messages (by configuring your mailing list subscription and
email client as needed) so that you are aware of all deadlines, process
changes, etc.

2. IRC, for time-sensitive interactions.

With more than 50 teams involved, the three members of the release team
can't track each of you down when there is a deadline. We rely on
your daily presence in the #openstack-release IRC channel during
deadline weeks. You are, of course, welcome to stay in channel all the
time, but we need you to be there at least during deadline weeks.

3. Written documentation, for relatively stable information.

The release team has published the schedule for the Pike cycle to
http://releases.openstack.org/pike/schedule.html. Although I will
highlight dates in the countdown emails, you may want to add important
dates from the schedule to your calendar. One way to do that is to
subscribe to the ICS feed for community-wide deadlines:
https://releases.openstack.org/schedule.ics

Note that the Pike cycle overlaps with summer holidays in the northern
hemisphere. If you are planning time off, please make sure your duties
are being covered by someone else on the team. It's best to let the
release team know in advance so we don't delay approval for release
requests from someone we don't recognize, waiting for your +1.

Things for you to do right now:

1. Update your release liaison on
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management

2. Make sure your IRC nickname and email address listed in
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml
are correct. The release team, foundation staff, and TC all use those
contact details to try to reach you at important points during the
cycle. Please make sure they are correct, and that the email address
delivers messages to a mailbox you check regularly.

3. Update your mail filters to ensure you see messages sent to the
openstack-dev list with [release] in the subject line.

4. Reply to this message, off-list, to me so I know that you have
received it. A simple “ack” is enough :)

-- 
Thierry Carrez (ttx)
Release Management PTL

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 13

2017-03-03 Thread Chris Dent


This week's resource providers/placement update operates only
slightly as a summary of placement-related activity at last week's
PTG. We had a big etherpad of topics

https://etherpad.openstack.org/p/nova-ptg-pike-placement

and an entire afternoon (plus some extra time elsewhen) to cover
them, but really only addressed three of them (shared resource
handling, custom resource classes, traits) in any significant
fashion, touching on nested-resource providers a bit in the room and
claims in the placement service on the etherpad. Some summation of
that below.

# What Matters Most

A major outcome from the discussion was that the can_host/shared
concept will not be used for dealing with shared resources (such as
shared disk). Instead a resource provider that is a compute node
will be identified by the fact that it has a trait (the actual value
to be determined). When the compute node creates or updates its own
resource provider it will add that trait. When the nova-scheduler
asks for resources to filter it will include that trait.

This means that the traits spec (and its POC) is now priority one in
the placement universe:

https://review.openstack.org/#/c/345138/

# What's Changed

## Ironic Inventory

There was some debate about where in the layering of code within
nova-compute the creation of custom resource classes should be
handled. Having these is necessary for the effective management of
ironic nodes. The discussion resulted in this new version of ironic
inventory handling:

https://review.openstack.org/#/c/437602/

## Nested Resource Providers

There was some discussion at a flipboard about the concept of
resource providers with multiple parents. We eventually decided
"let's not do that". There was also some vague discussion about
whether a hardware configuration that is currently planned to be
expressed as nested resource providers could instead be expressed as a
custom resource class, along the lines of how bare metal
configurations are described. This was left unresolved, in part
because presumably hardware configuration is dynamic in some or
many cases.

# Main Themes

## Traits

There's been a decision to normalize trait names so they look a bit
more like custom resource classes. That work is at


https://review.openstack.org/#/q/status:open+project:openstack/os-traits+branch:master+topic:normalize
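
For context, the intent (as I understand it) is that consumers can refer
to standardized trait names as constants; a minimal sketch, assuming the
normalized names end up exposed at the module level of os-traits:

    import os_traits

    # Standard traits look like custom resource classes: upper-case,
    # underscore-separated symbols.
    print(os_traits.HW_CPU_X86_AVX2)   # -> 'HW_CPU_X86_AVX2'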

This is being done concurrently with the spec and code for traits
within placement/nova:


https://review.openstack.org/#/q/status:open+topic:bp/resource-provider-traits
https://review.openstack.org/#/q/status:open+topic:bp/resource-provider-tags

(That topic mismatch needs to be fixed.)

## Shared Resource Providers

As mentioned above, the plan on this work has changed, thus there is
currently no code in flight for it, but there is a blueprint:

https://blueprints.launchpad.net/nova/+spec/shared-resources-pike

## Nested Resource Providers

https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

## Docs

https://review.openstack.org/#/q/topic:cd/placement-api-ref

The start of creating an API ref for the placement API. Not a lot
there yet as I haven't had much of an opportunity to move it along.
There is, however, enough there for additional content to be
started, if people have the opportunity to do so. Check with me to
divvy up the work if you'd like to contribute.

## Claims in the Scheduler

We intended to talk about this at the PTG but we didn't get to it.
There was some discussion on the etherpad (linked above) but the
consensus was that planning for how to do this while the service
was a) still evolving, b) only just starting to do filtering was
premature: Anything we try to plan now will likely be wrong or at
least not aligned with eventual discoveries. We decided, instead,
that the right thing to do was to make what we've got immediately
planned work correctly and to get some real return on the promise of
the placement API (which in the immediate sense means getting shared
disk managed effectively).

## Performance

Another topic we didn't get to. We're aware that there are some
redundancies in the resource tracker that we'd like to clean up

http://lists.openstack.org/pipermail/openstack-dev/2017-January/110953.html

but it's also the case that we've done no performance testing on the
placement service itself. For example, consider the case where a
CERN-sized cloud is turned on (at Ocata) for the first time. Once
all the nodes have registered themselves as resource providers the
first request for some candidate destinations in the filter
scheduler will get back all those resource providers. That's
probably a waste on several dimensions and will get a bit loady.

We ought to model both these extreme cases and the common cases to
make sure there aren't unexpected performance drains.

## Microversion Handling on the Nova side

Matt identified that we'll need to be more conscious of microversions
in nova-status, the 

Re: [openstack-dev] [infra] where the mail-man lives?

2017-03-03 Thread James E. Blair
"bogda...@mail.ru"  writes:

> The mail-man for the openstack-dev mail list is missing at least 'kolla'
> and 'development' checkboxes. It would be nice to make its filter case
> insensitive as well, so it would match both 'All' and 'all' tags. How
> could we fix that? Any place to submit a PR?

Unfortunately that has to be changed through the mailman web interface.
I've CC'd the folks listed as contacts for the mailing list.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zuul v3 - What's Coming: What to expect with the Zuul v3 Rollout

2017-03-03 Thread James E. Blair
"bogda...@mail.ru"  writes:

> That's great news! In-repo configs will speed up development for teams,
> with a security caveat for the infrastructure team to keep in mind. The
> ansible runner CI node which runs playbooks for defined jobs should not
> contain sensitive information, like keys and secrets in files or
> exported env vars, unless they are one-time or limited in time. The
> same applies to the nodepool nodes allocated for a particular CI test
> run. Otherwise, a malformed patch could make ansible cat/echo all of
> the secrets to the publicly available build logs.

Indeed that is a risk.  To mitigate that, we are building a restricted
execution environment for Ansible so that jobs defined in-repo will only
be allowed to access a per-job staging area on the runner.  And we also
plan on running that in a chrooted container.

These protections are not complete yet, which is why our test instance
at the moment is very limited in scope.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][release][ptl] Adding docs to the release schedule

2017-03-03 Thread Ian Cordasco
-Original Message-
From: Brian Rosmaita 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: March 2, 2017 at 16:55:10
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [docs][release][ptl] Adding docs to the
release schedule

> On 3/2/17 9:45 AM, Ian Cordasco wrote:
> > -Original Message-
> > From: Telles Nobrega
> > Reply: OpenStack Development Mailing List (not for usage questions)
> >
> > Date: March 2, 2017 at 08:01:29
> > To: OpenStack Development Mailing List (not for usage questions)
> >
> > Cc: openstack-d...@lists.openstack.org
> > Subject: Re: [openstack-dev] [docs][release][ptl] Adding docs to the
> > release schedule
> >
> >> I really believe that this idea will make us work harder on keeping our
> >> docs in place and will make for a better documented product by release
> >> date.
> >> As shared before, I do believe that this isn't easy and will demand a
> >> lot of effort from some teams, especially smaller teams with too much to do,
> >> but we from Sahara are on board with this approach and will try our best to
> >> do so.
> >
> > Most things worth doing are difficult. =) This seems to be one of
> > them. If deliverable teams really work together, they may end up in a
> > situation like Glance did this cycle where we kind of just sat on our
> > hands after RC-1 was tagged. That time *could* have been better spent
> > reviewing all of our documentation.
>
> At the risk of sounding defensive here, what exactly are you referring
> to? The api-ref [0], the dev docs in the glance repo [1], and the api
> docs in the glance-specs repo [2] all had patches up for review after RC-1.
>
> [0] https://review.openstack.org/#/c/426603/
> [1] https://review.openstack.org/#/c/429341/
> [2] https://review.openstack.org/#/c/426605/

I'm not trying to say we did something wrong. Just that there were
quite a few people who were focusing on things other than the Ocata
documentation. Hence the emphasis on "could". I think Glance came out
of Ocata looking great. I think we could look even better if we spent
more time on improving our documentation.

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Create backup with snapshots

2017-03-03 Thread yang, xing
In the original backup API design, volume_id is a required field.  In the CLI, 
volume_id is positional and required as well.  So when I added support for 
backup from a snapshot, I added snapshot_id as an optional field in the request 
body of the backup API.  While a backup is in progress, you cannot delete the 
volume.  Backup from snapshot and backup from volume use the same API.  
So I think volume status should be changed to “backing-up” to be consistent.  
Now I’m thinking the status of the snapshot should be changed to “backing-up” 
too if snapshot_id is provided.
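
For illustration, here is a hedged sketch of the request body described
above (field names per this thread; the endpoint path and UUIDs are
placeholders):

    # POST this body to the backups endpoint, e.g. /v2/{project_id}/backups.
    body = {
        "backup": {
            "name": "nightly-backup",
            "volume_id": "<volume-uuid>",      # required by the API design
            "snapshot_id": "<snapshot-uuid>",  # optional; back up from the snapshot
        }
    }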

Thanks,
Xing



From: 王玺源 [wangxiyuan1...@gmail.com]
Sent: Thursday, March 2, 2017 10:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder] Create backup with snapshots

Hi cinder team:
We met a problem about backup create recently.

A backup can be created from a volume or a snapshot. In both cases, the 
volume's status is set to 'backing-up'.

But as far as I know, when users create a backup from a snapshot, the volume is 
not used (correct me if I'm wrong). So why is the volume's status changed? Should 
it stay available? It's a little strange that the volume is "backing-up" when 
actually only the snapshot is used for backup creation. A volume in 
"backing-up" status can't be used for some other actions, such as: 
attach, delete, export to image, extend, create from volume, create backup from 
volume, and so on.

So is there any reason we change the volume's status here? Or does any 
third-party driver require the volume's status to be "backing-up" when creating a 
backup from a snapshot?

Thanks!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Shaker] Shaker Image Builder fails in Ocata due to OS::Glance::Image deprecation

2017-03-03 Thread Ilya Shakhat
Hi Sai,

As we discussed at PTG, I've added an option to build image using
diskimage-builder tool. The code is not merged yet, please see
https://review.openstack.org/#/c/441126/. With this patch,
shaker-image-builder can automatically switch to doing a local build using
diskimage-builder when Glance v1 is not available. The behavior can also be
enforced with the --image-builder-mode parameter.

Thanks,
Ilya



2017-02-03 13:16 GMT+04:00 Ilya Shakhat :

> Starting with Ocata, it looks like only Glance v2 is enabled by default. This
>> breaks the shaker image builder template since we make use of the resource
>> type OS::Glance::Image and creating images from URL links is not
>> supported in v2. How do we want to deal with this? Maybe have the user pass in
>> the name/image-id and pass them as properties to the shaker image builder
>> template, or instead advise the user to turn on the v1 API?
>> Thoughts/suggestions?
>>
>>
> Looks like the only way to deal with that is to build the image manually
> and then pass its name via --image-name parameter (or name the image
> 'shaker-image').
>
> Thanks,
> Ilya
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] where the mail-man lives?

2017-03-03 Thread bogda...@mail.ru
The mail-man for the openstack-dev mail list is missing at least 'kolla'
and 'development' checkboxes. It would be nice to make its filter case
unsensitive as well, so it would match both 'All' and 'all' tags. How
could we fix that? Any place to submit a PR?


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gluon] Regarding multi-network and multi-tenant support

2017-03-03 Thread Georg Kunz
Hi Mohammad

> I have a question regarding multi-network and multi-tenant support.
> Currently Gluon supports only one network and subnet, i.e. GluonNetwork and 
> GluonSubnet respectively,
> for which I can see hard-coded values at 
> https://github.com/openstack/gluon/blob/aa7edbf878c64829ef2e028c8cd0e5bb36ea1d51/gluon/plugin/core.py
>  (line numbers 164 and 174).
> Please let me know if you have a plan to add multi-network functionality.

Thank you for checking out Gluon. The GluonNetwork and GluonSubnet you are 
referring to do not represent the (tenant) networks that Gluon can support. In 
fact, the actual (tenant) networks only exist in the respective networking 
backend (i.e. SDN controller) and are configured through the Proton APIs.

The GluonNetwork and GluonSubnet are currently used only as vehicles to expose 
the network ports which are configured via Gluon to Nova. Specifically, we 
collect all Gluon ports underneath the GluonNetworks and GluonSubnet because 
Neutron ports currently have to exist on a network and subnet according to the 
Neutron data model. In the networking backend, Gluon ports can belong to 
different (tenant) networks.

Best regards
Georg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zuul v3 - What's Coming: What to expect with the Zuul v3 Rollout

2017-03-03 Thread bogda...@mail.ru
That's great news! In-repo configs will speed up development for teams,
with a security caveat for the infrastructure team to keep in mind. The
ansible runner CI node which runs playbooks for defined jobs should not
contain sensitive information, like keys and secrets in files or
exported env vars, unless they are one-time or limited in time. The
same applies to the nodepool nodes allocated for a particular CI test
run. Otherwise, a malformed patch could make ansible cat/echo all of
the secrets to the publicly available build logs.

> 
> From: Monty Taylor [mordred at inaugust.com]
> Sent: 01 March 2017 7:26
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] Zuul v3 - What's Coming: What to expect with
the   Zuul v3 Rollout
>
> ...
> * Self-testing In-Repo Job Config
> * Ansible Job Content
> ...

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [chef] release of stable/newton cookbook set

2017-03-03 Thread j.kl...@cloudbau.de

Hi,

I just wanted to let you all know that we recently released the stable/newton 
cookbook set.

For this cycle the released cookbook set includes:

cookbook-openstack-block-storage (deploying cinder)
cookbook-openstack-common (shared configuration and libraries)
cookbook-openstack-compute (deploying nova)
cookbook-openstack-dashboard (deploying horizon)
cookbook-openstack-identity (deploying keystone)
cookbook-openstack-image (deploying glance)
cookbook-openstack-integration-test (deploying tempest)
cookbook-openstack-network (deploying neutron)
cookbook-openstack-ops-database (shared database configuration)
cookbook-openstack-ops-messaging (shared mq configuration)
cookbook-openstack-orchestration (deploying heat)
cookbook-openstack-telemetry (deploying ceilometer and gnocchi)
openstack-chef-repo (combining all cookbooks and deployment specific 
configuration)

Sadly we were not able to release the cookbook-openstack-application-catalog 
(deploying Murano) this cycle, since we decided the deployment and the provided 
packages are not stable enough. We are however planning to release it with the 
stable/ocata cookbook set, but would be more than happy to get some support and 
more hands for the development and stabilisation of this.
During the Ocata development cycle we are planning to add the compute 
placement-api and backport it to stable/newton if possible. Additionally 
we will try to move as many projects as possible to run via WSGI (as stated 
in our Pike community goals).
Since we are currently down to only three core reviewers and not many more code 
contributors, we will reduce the effort spent on adding new features to a minimum 
and focus on stabilisation and the minimal needed feature set for Ocata. If you 
would like to add a feature or even a whole project cookbook, please ping me or 
any other core in our IRC channel or via this mailing list.

I will also be available during the OpenStack Ops Mid-cycle Meetup in Milan 
this month and will moderate the ops-config-management session (which of course 
will also be about all the other config management tools).

Cheers,
Jan__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-03 Thread Ligong LG1 Duan
I am wondering whether DIB can become a component of Glance, as DIB is used to 
create OS images and Glance to upload OS images.

Regards,
Ligong Duan 

-Original Message-
From: Matthew Thode [mailto:prometheanf...@gentoo.org] 
Sent: Friday, March 03, 2017 9:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tripleo][diskimage-builder] Status of 
diskimage-builder

On 03/02/2017 03:31 PM, Emilien Macchi wrote:
>>> 1) Move diskimage-builder into own (big tent?) project. Setup a new 
>>> PTL, etc.
> Let's move forward with this one if everybody agrees on that.
> 
> DIB folks: please confirm on this thread that you're ok to move out 
> DIB from TripleO and be an independent project.
> Also please decide if we want it part of the Big Tent or not (it will 
> require a PTL).
> 
>>> 2) Move diskimage-builder into openstack-infra (fungi PTL).
> I don't think Infra wants to carry this one.
> 
>>> 3) Keep diskimage-builder under tripleo (EmilienM PTL).
> We don't want to carry this one anymore for the reasons mentioned in 
> that thread.
> 

As a sometimes contributor to DIB for Gentoo stuff, I'm fine with moving it out 
into its own project under the big tent, with a PTL and all.

--
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Freeze dates for Pike

2017-03-03 Thread Sylvain Bauza


On 02/03/2017 23:46, Matt Riedemann wrote:
> I mentioned this in the nova meeting today [1] but wanted to post to the
> ML for feedback.
> 
> We didn't talk about spec or feature freeze dates at the PTG. The Pike
> release schedule is [2].
> 
> Spec freeze
> ---
> 
> In Newton and Ocata we had spec freeze on the first milestone.
> 
> I'm proposing that we do the same thing for Pike. The first milestone
> for Pike is April 13th which gives us about 6 weeks to go through the
> specs we're going to approve. A rough look at the open specs in Gerrit
> shows we have about 125 proposed and some of those are going to be
> re-approvals from previous releases. We already have 16 blueprints
> approved for Pike. Keep in mind that in Newton we had ~100 approved
> blueprints by the freeze and completed or partially completed 64.
> 

I agree with you: allowing more time for accepting specs means less
time for merging their implementations.

That still leaves 6 weeks for accepting around 80 specs, which seems
enough to me.


> Feature freeze
> --
> 
> In Newton we had a non-priority feature freeze between n-1 and n-2. In
> Ocata we just had the feature freeze at o-3 for everything because of
> the short schedule.
> 
> We have fewer core reviewers so I personally don't want to cut off the
> majority of blueprints too early in the cycle so I'm proposing that we
> do like in Ocata and just follow the feature freeze on the p-3 milestone
> which is July 27th.
> 
> We will still have priority review items for the release and when push
> comes to shove those will get priority over other review items, but I
> don't think it's helpful to cut off non-priority blueprints before n-3.
> I thought there was a fair amount of non-priority blueprint code that
> landed in Ocata when we didn't cut it off early. Referring back to the
> Ocata blueprint burndown [3] most everything was completed between the
> 2nd milestone and feature freeze.
> 

Looks good to me too. Proposing and advertising one review sprint day
(or even two days) around pike-2 could also help us, because we could
have this kind of 'runway' between proposers and reviewers.

If so, after pike-2, we could just see how many blueprints are left for
the last milestone, which could help us get a better view of what we
could realistically deliver for Pike.

My .02€
-Sylvain


> -- 
> 
> Does anyone have an issue with this plan? If not, I'll update [4] with
> the nova-specific dates.
> 
> [1]
> http://eavesdrop.openstack.org/meetings/nova/2017/nova.2017-03-02-21.00.log.html#l-119
> 
> [2] https://wiki.openstack.org/wiki/Nova/Pike_Release_Schedule
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2017-February/111639.html
> 
> [4] https://releases.openstack.org/pike/schedule.html
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] final day recordings for the Virtual Team Gathering

2017-03-03 Thread Antoni Segura Puimedon
Hi Kuryrs,

Thank you all for participating in the VTG. Here you can find the last
two recordings:

https://youtu.be/ti4oOK6p_Dw

https://youtu.be/iEdOTngEw4I

Let's discuss the priority of all the action items next week!

Toni

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-03 Thread Simon Leinen
Emilien Macchi writes:
> DIB folks: please confirm on this thread that you're ok to move out
> DIB from TripleO and be an independent project.

As a DIB user (and occasional contributor of patches in the past) and
TripleO non-user, I'm in favor of the separation.

> Also please decide if we want it part of the Big Tent or not (it will
> require a PTL).

No opinion.
-- 
Simon.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-03 Thread Yolanda Robla Mota
As a user of and contributor to diskimage-builder, I think that being an
independent project and moving to the big tent will benefit it. +1 from
my side.

On Fri, Mar 3, 2017 at 2:35 AM, Matthew Thode 
wrote:

> On 03/02/2017 03:31 PM, Emilien Macchi wrote:
> >>> 1) Move diskimage-builder into own (big tent?) project. Setup a new
> PTL,
> >>> etc.
> > Let's move forward with this one if everybody agrees on that.
> >
> > DIB folks: please confirm on this thread that you're ok to move out
> > DIB from TripleO and be an independent project.
> > Also please decide if we want it part of the Big Tent or not (it will
> > require a PTL).
> >
> >>> 2) Move diskimage-builder into openstack-infra (fungi PTL).
> > I don't think Infra wants to carry this one.
> >
> >>> 3) Keep diskimage-builder under tripleo (EmilienM PTL).
> > We don't want to carry this one anymore for the reasons mentioned in
> > that thread.
> >
>
> As a sometimes contributor to DIB for Gentoo stuff, I'm fine with moving
> it out into its own project under the big tent, with a PTL and all.
>
> --
> Matthew Thode (prometheanfire)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yolanda Robla Mota
NFV Partner Engineer
yrobl...@redhat.com
+34 605641639
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-03 Thread Ghanshyam Mann
Thanks. +1. I added my list to the ethercalc.

The left-out scenario tests can be run in the periodic and experimental jobs. IMO in
both (periodic and experimental), to monitor their status periodically as
well as on a particular patch if we need to.

-gmann

On Fri, Mar 3, 2017 at 4:28 PM, Andrea Frittoli 
wrote:

> Hello folks,
>
> we discussed a lot since the PTG about issues with gate stability; we need
> a stable and reliable gate to ensure smooth progress in Pike.
>
> One of the issues that stands out is that most of the time during test
> runs our test VMs are under heavy load.
> This can be the common cause behind several failures we've seen in the
> gate, so we agreed during the QA meeting yesterday [0] that we're going to
> try reducing the load and see whether that improves stability.
>
> Next steps are:
> - select a subset of scenario tests to be executed in the gate, based on
> [1], and run them serially only
> - the patch for this is [2] and we will approve this by the end of the day
> - we will monitor stability for a week - if needed we may reduce
> concurrency a bit on API tests as well, and identify "heavy" tests
> candidate for removal / refactor
> - the QA team won't approve any new test (scenario or heavy resource
> consuming api) until gate stability is ensured
>
> Thanks for your patience and collaboration!
>
> Andrea
>
> ---
> irc: andreaf
>
> [0] http://eavesdrop.openstack.org/meetings/qa/
> 2017/qa.2017-03-02-17.00.txt
> [1] https://ethercalc.openstack.org/nu56u2wrfb2b
> [2] https://review.openstack.org/#/c/439698/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev