[openstack-dev] [neutron][networking-l2gw] Unable to create release tag

2017-03-13 Thread Gary Kotton
Hi,
I was asked to create a release tag for stable/ocata. This fails with:

gkotton@ubuntu:~/networking-l2gw$ git push gerrit tag 10.0.0
Enter passphrase for key '/home/gkotton/.ssh/id_rsa':
Counting objects: 1, done.
Writing objects: 100% (1/1), 533 bytes | 0 bytes/s, done.
Total 1 (delta 0), reused 0 (delta 0)
remote: Processing changes: refs: 1, done
To ssh://ga...@review.openstack.org:29418/openstack/networking-l2gw.git
 ! [remote rejected] 10.0.0 -> 10.0.0 (prohibited by Gerrit)
error: failed to push some refs to 
'ssh://ga...@review.openstack.org:29418/openstack/networking-l2gw.git'

Any ideas?
Thanks
Gary


Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-13 Thread Renat Akhmerov
So again, I’m for simplicity, but the kind of simplicity that also allows 
flexibility in the future.

There’s one principle that I usually follow in programming that says:

“Space around code (absence of code) has more potential than the code itself.”

That means that it’s better to get rid of any stuff that’s not currently needed 
and add things as requirements change. However, that doesn’t always work well 
in framework development, because the cost of initial inflexibility may become 
too high in the future; that cost comes from the need to stay backwards 
compatible. What I’m trying to say is that IMO it’s ok just to keep it as 
simple as a base class with a run() method for now, and think about how we 
can add more things in the future, if we need to, using a mixin approach. So 
it seems like it’s going to be:

class Action(object):

    def run(self, ctx):
        ...


class Mixin1(object):

    def method11(self):
        ...

    def method12(self):
        ...


class Mixin2(object):

    def method21(self):
        ...

    def method22(self):
        ...


Then my concrete action could use a combination of Action and any of the mixins:

class MyAction(Action, Mixin1):
    ...


class MyAction(Action, Mixin2):
    ...

or just

class MyAction(Action):
    ...

Is this flexible enough or does it have any potential issues?

IMO, a base class is still needed to define the contract that all actions should 
follow, so that a runner knows what’s possible to do with actions.

Renat Akhmerov
@Nokia

> On 13 Mar 2017, at 16:49, lương hữu tuấn  wrote:
> 
> 
> 
> On Mon, Mar 13, 2017 at 9:34 AM, Thomas Herve  > wrote:
> On Fri, Mar 10, 2017 at 9:52 PM, Ryan Brady  > wrote:
> >
> > One of the pain points for me as an action developer is the OpenStack
> > actions[1].  Since they all use the same method name to retrieve the
> > underlying client, you cannot simply inherit from more than one so you are
> > forced to rewrite the client access methods.  We saw this in creating
> > actions for TripleO[2].  In the base action in TripleO, we have actions that
> > make calls to more than one OpenStack client and so we end up re-writing and
> > maintaining code.  IMO the idea of using multiple inheritance there would be
> > helpful.  It may not require the mixin approach here, but rather a design
> > change in the generator to ensure the method names don't match.
> 
> Is there any reason why those methods aren't functions? AFAICT they
> don't use the instance, they could live top level in the action module
> and be accessible by all actions. If you can avoid multiple
> inheritance (or inheritance!) you'll simplify the design. You could
> also do client = NovaAction().get_client() in your own action (if
> get_client was a public method).
> 
> --
> Thomas
> 
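(For illustration, a minimal sketch of the module-level approach Thomas
describes. All names here, like get_nova_client and _session_from_context,
are made up; this is not Mistral's actual API:)

# Hypothetical module-level client helpers, instead of methods inherited
# from per-service base classes.
import glanceclient
import novaclient.client


def _session_from_context(ctx):
    # Assumed helper: build a keystoneauth session from the action context.
    raise NotImplementedError


def get_nova_client(ctx):
    # Any action can call this without inheriting from a NovaAction class.
    return novaclient.client.Client('2', session=_session_from_context(ctx))


def get_glance_client(ctx):
    return glanceclient.Client('2', session=_session_from_context(ctx))


class MyAction(Action):

    def run(self, ctx):
        # Talks to two services with no multiple inheritance involved.
        server = get_nova_client(ctx).servers.get('some-server-id')
        return get_glance_client(ctx).images.get(server.image['id'])['name']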
> If you want to do that, you need to change the whole structure of the base action 
> and the whole way of creating an action
> as you have described, and IMHO I myself do not like this idea:
> 
> 1. Mistral is working well (from the standpoint of creating actions), and 
> changing it is not a short-term piece of work.
> 2. Using a base class to create a base action is actually a good idea, in order to 
> control things and make them easy for action developers. 
> The base class defines the whole mechanism for executing an action; 
> developers do not need to take care of it, only
> provide the OpenStack clients (the _create_client() method).
> 3. From the #2 point of view, the NovaAction().get_client() alternative 
> does not make sense, since the problem here is the subclassing mechanism,
> not the way get_client() is called.
> 
> @Renat: I myself am not against multiple inheritance either; the only thing is, 
> if we want to use multiple inheritance, we should think more thoroughly 
> about the hierarchy of inheritance, what each inheritance layer 
> does, etc. That work will make the multiple inheritance easy to understand 
> and, for action developers, easy to build on. So, IMHO, I vote for making it 
> simple and easy to understand first (if you continue with mistral-lib) and 
> then doing the next thing later.
> 
> Br,
> 
> Tuan/Nokia


Re: [openstack-dev] What do we want to be when we grow up?

2017-03-13 Thread Joshua Harlow

Thierry Carrez wrote:

Joshua Harlow wrote:

[...]
* Be opinionated; let's actually pick *specific* technologies based on
well thought out decisions about what we want out of those technologies
and integrate them deeply (and if we make a bad decision, that's ok, we
are all grown ups and we'll deal with it). IMHO it hasn't turned out
well trying to have drivers for everything and everyone so let's umm
stop doing that.


About "being all grown-ups and dealing with it", the problem is that
it's mostly an externality: the choice is done by developers and the
cost of handling the bad decision is carried by operators. Externalities
make for bad decisions.


Fair point, so how do we make it not an externality (I guess this is 
where the core services arch-wg thread comes in?)? It all reminds me of 
GIMP and how its UI was really bad to use for years. Sometimes 
developers don't make the best decisions (and I will readily admit I 
sometimes don't either).




I agree that having drivers for everything is nonsense. The model we
have started to promote (around base services) is an expand/contract
model: start by expanding support to a couple viable options, and then
once operators / the market decides on one winner, contract to only
supporting that winner, and start using the specific features of that
technology.


Is it possible to avoid the expand/contract? I get the idea, but it 
seems awfully slow and drawn out... I'd almost rather pick a good-enough 
solution and devote a lot of resources to making it the best solution, 
instead of choosing 2 solutions (neither very good) and then later 
picking 1 (by the time that happens, someone that picked solution #1 
would be quite a bit farther ahead of you).




The benefit is that the final choice ends up being made by the
operators. Yes, it means that at the start you will have to do with the
lowest common denominator. But frankly at this stage it would be awesome
to just have the LCD of DLMs, rather than continue disagreeing on
Zookeeper vs. etcd and not even having that lowest common denominator.


On a side note, whenever I hear "operators" or "developers" it makes me 
sad... Why do we continue to think there are two groups here? I'd almost 
like there to be some kind of rotation among *all* OpenStack folks where, 
say, individuals in the community rotate between companies to get a feel 
for what it means to operate & develop this beast.


Perhaps some kind of internship-like thing (except call it something 
else). I'd certainly like to break down these walls that continue to be 
mentioned, when I don't really think they need to exist...





* Lead others; we are one of the older cloud foundations (I think?), so
we should be leading others such as the CNCF; we must be
heavily outreaching to them and helping them learn from our
mistakes.


We can always do more, but this is already happening. I was asked for
and provided early advice to the CNCF while they were setting up their
technical governance structure. Other foundations reached out to us to
discuss and adopt our vulnerability management models. There are a lot
more examples.


Is it theoretically possible that we just merge with some of these 
foundations? Aren't we better as a bundle of twigs instead of our own 
stick?


"A single twig breaks, but the bundle of twigs is strong." - Tecumseh

Why aren't we leading the formation of that bundle?




[...]
* Full control of infrastructure (mostly discard it); I don't think we
necessarily need to have full control of infrastructure anymore. I'd
rather target something that builds on the layers of others at this
point and offers value there.


+1





Re: [openstack-dev] What do we want to be when we grow up?

2017-03-13 Thread Joshua Harlow

Monty Taylor wrote:

On 03/10/2017 11:39 PM, Joshua Harlow wrote:

* Interoperability - kept as is (though I can't really say how many
public clouds there are anymore to interoperate with).


There are plenty. As Dan Smith will tell you, I'm fond of telling people
just how many OpenStack Public Cloud accounts I have.


How many do you have, and what are their user names and passwords?

Please paste secrets.yaml and clouds.yaml, mmmk

On another topic, can we integrate with something other than a plain-text 
secrets file in shade (or os-client-config)? I know keyring 
(https://pypi.python.org/pypi/keyring) was tried once; perhaps we should 
try it again?
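(For illustration, a minimal sketch of that kind of keyring integration.
get_password/set_password are keyring's real API, but the service/account
names, and the idea of os-client-config consulting the keychain, are
assumptions:)

import keyring

# Store a cloud password once in the OS keychain, instead of in clouds.yaml:
keyring.set_password('openstack:mycloud', 'myuser', 's3cret')

# Later, something like os-client-config could (hypothetically) resolve the
# credential at auth time rather than reading it from a plain-text file:
password = keyring.get_password('openstack:mycloud', 'myuser')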




This:

https://docs.openstack.org/developer/os-client-config/vendor-support.html

Is a non-comprehensive list. I continue to find new ones I didn't know
about. I also haven't yet gotten an account on Teuto.net to verify it.

There is also this:

https://www.openstack.org/marketplace/public-clouds/



[openstack-dev] [neutron] bug deputy report (Mar 6- Mar 13)

2017-03-13 Thread Das, Anindita
Hi All,

I was serving as bug deputy for the last week. There were 25 bugs and a few RFEs 
raised last week.

Of the 25 bugs, 2 are still uncategorized, so help with them will be highly 
appreciated.


1. https://bugs.launchpad.net/neutron/+bug/1672345 -- Loadbalancer V2 
ports are not serviced by DVR

2. https://bugs.launchpad.net/neutron/+bug/1672433 -- dhcp-agent should 
send a gratuitous ARP after assigning an IP address in the dhcp namespace

Thanks,
Anindita (irc: dasanind)


[openstack-dev] [Keystone] Admin or certain roles should be able to list full project subtree

2017-03-13 Thread Adrian Turjak
Hello Keystone Devs,

I've been playing with subtrees in Keystone for the last while, and one
thing that hit me recently is that as admin, I still can't actually do
subtree_as_list unless I have a role in all projects in the subtree.
This is kind of silly.

I can understand why this limitation was implemented, but it's also a
frustrating constraint, because as an admin I have the power to add
myself to all these projects anyway; why then can't I just list them?

Right now if I want to get a list of all the subtree projects I need to
do subtree_as_ids, then list ALL projects, and then go through that list
grabbing out only the projects I want. This is a pointless set of
actions, and having to get the full project list when I just need a
small subset is really wasteful.
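(For illustration, a rough sketch of that workaround using
python-keystoneclient; the flatten() helper is made up, and the
subtree_as_ids keyword and .subtree attribute are assumptions about the
v3 client:)

# Rough sketch of the workaround described above. Assumes an authenticated
# keystoneauth session `sess`.
from keystoneclient.v3 import client

ks = client.Client(session=sess)
parent = ks.projects.get('parent-project-id', subtree_as_ids=True)

subtree_ids = set()

def flatten(tree):
    # subtree_as_ids returns a nested {project_id: {child_id: ...}} mapping
    for pid, children in (tree or {}).items():
        subtree_ids.add(pid)
        flatten(children)

flatten(parent.subtree)

# The wasteful part: list ALL projects, then keep only the small subset.
subtree_projects = [p for p in ks.projects.list() if p.id in subtree_ids]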

Beyond the admin case, people may in fact want certain roles to be able
to see the full subtree regardless of access. In fact I have a role
'project_admin' which allows you to edit your own roles within the scope
of your project, including setting those roles to inherit down, and
creating subprojects. If you have the project_admin role, it would make
sense to see the full subtree regardless of whether you actually have
access to each element in the subtree or not.

Looking at the code in Keystone, I'm not entirely sure there is a good
way to set role-based policy for this, given how it was set up. Another
option might be to introduce a filter which allows listing of
subprojects. Project list is already an admin/cloud_admin-only command,
so there is no need to limit it, and the filter could be as simple as
'subtree=' and would make getting subtrees as admin, or with a
given admin-like role, actually doable without the pain of roles everywhere.
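(Purely illustrative, since this filter doesn't exist today, the proposed
call might look like:

    GET /v3/projects?subtree=<parent_project_id>

returning only the projects under the given parent.)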

The HMT stuff in Keystone is very interesting, and potentially very
useful, but it also feels like so many of the features are a little
half-baked. :(

Does anyone have some insight into this, and is there any interest in
making subtree listing more flexible/useful?

Cheers,
Adrian Turjak


Further reading:
https://github.com/openstack/keystone-specs/blob/master/specs/keystone/kilo/project-hierarchy-retrieval.rst
https://bugs.launchpad.net/keystone/+bug/1434916
https://review.openstack.org/#/c/167231





Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-13 Thread Joshua Harlow



Clint Byrum wrote:

Excerpts from Fox, Kevin M's message of 2017-03-14 00:09:55 +:

With my operator hat on, I would like to use the etcd backend, as I'm already 
paying the cost of maintaining etcd clusters as part of Kubernetes. Adding 
Zookeeper is a lot more work.



It would probably be a good idea to put etcd in as a plugin and make
sure it is tested in the gate. IIRC nobody has requested the one thing
ZK can do that none of the others can (fair locking), but it's likely
there are bugs in both drivers since IIRC neither are enabled in any
gate.


Pretty sure etcd can do fair locking now :-P

Or reading 
https://coreos.com/etcd/docs/latest/v2/api.html#atomically-creating-in-order-keys 
it seems like we should be able to.
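(A rough sketch, for illustration, of the in-order-keys locking recipe from
that doc. The endpoint, key names and TTL are assumptions, and a real
implementation would watch the directory instead of busy-polling:)

# Illustrative fair-lock sketch on top of etcd v2 in-order keys.
import requests

BASE = 'http://127.0.0.1:2379/v2/keys'   # assumed local etcd
LOCK_DIR = BASE + '/mylock'

# POST creates an in-order key; the creation index records arrival order.
node = requests.post(LOCK_DIR, data={'value': 'worker-1', 'ttl': 30}).json()['node']
my_key = node['key']

def holder():
    # A sorted listing returns waiters in creation order; the first entry
    # holds the lock, which is exactly the "fair" part.
    listing = requests.get(LOCK_DIR,
                           params={'recursive': 'true', 'sorted': 'true'}).json()
    return listing['node']['nodes'][0]['key']

while holder() != my_key:
    pass  # in practice, watch with ?wait=true instead of spinning

# ... critical section ...

requests.delete(BASE + my_key)  # release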


Honestly the APIs of both etcd and zookeeper seem pretty equivalent; 
etcd is a little more k/v oriented (and has key ttls, to a degree) but 
the rest seems nearly identical (which isn't a bad thing).


etcd, I think, is also switching to gRPC sometime in the future (AFAIK); 
that feature is in alpha/beta/experimental right now.






Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-13 Thread Ghanshyam Mann
On Mon, Mar 13, 2017 at 7:37 PM, Andrea Frittoli 
wrote:

> On Sat, Mar 11, 2017 at 10:45 PM Matt Riedemann 
> wrote:
>
> On 3/10/2017 3:02 PM, Andrea Frittoli wrote:
> >
> > We had a couple of sessions related to this topic at the PTG [0][1].
> >
> > We agreed that we want to still maintain integration tests only in
> > Tempest, which means that API micro versions that have no integration
> > impact can be tested via functional tests.
>
> To be clear, "integration" here means tests that span operations across
> multiple services, correct? Like a compute test that first creates a
> port in the networking service and a volume in the block storage service
> and then uses those to create a server and maybe take a snapshot of it
> which is then verified was uploaded to the image service.
>
> Yes, indeed.
>
>
>
> The non-integration things are self-contained in a single service, like
> if all you need to do is create an aggregate, show its details and
> validate the response, at a particular microversion; we can just do that
> in nova functional tests, and it's not necessary in Tempest.
>
> It might be worth having a definition of this policy in the Tempest docs
> so when people ask this question again you can just point at the docs.
>
>
> I agree, we need to do a better job of documenting this.
> I just want to avoid having a black and white rule that would not work.
>
> To me, tests that should be in Tempest are tests that make sense to run
> against any change in any repo, and the reasons for that can be many:
> - the test involves multiple services (more than just "it uses a token", though)
> - the test covers a feature which is key for interoperability
> - the test must be reliable and not take too long
>
> +1. I added those in https://review.openstack.org/#/c/444727/



> Based on this, I don't think it's possible to completely avoid the discussion
> about which tests should be in Tempest and which not; it's something to be
> considered on a test-by-test basis, based on a set of guidelines.
>
> We have an initial definition of scope in [0], but it's probably worth
> elaborating on it more. I'll put up a patch so that the discussion on it can
> continue in gerrit.
>
> [0] https://docs.openstack.org/developer/tempest/test-
> removal.html#tempest-scope
>
>
> >
> > In terms of which versions we test in the gate, for nova we always run
> > with min_microversion = None and max_microversion = latest, which means
> > that all tests will be executed.
> > Since microversions are incremental, and each microversion usually
> > involves zero or one tests on the Tempest side, I think it will be a while
> > before this becomes an issue for the common gate.
>
> We test max_microversion=latest only on master. On the devstack stable
> branch we cap the max_microversion, e.g.:
>
> Thanks, good point.
>
>
>
> https://github.com/openstack-dev/devstack/blob/stable/
> newton/lib/tempest#L339
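(For illustration, the kind of cap that devstack change applies, shown as a
tempest.conf fragment; the stable-branch version number here is made up:)

[compute]
# master leaves the range open (max_microversion = latest); a stable
# branch pins it, e.g.:
min_microversion = None
max_microversion = 2.38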
>
> --
>
> Thanks,
>
> Matt
>


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-13 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2017-03-14 00:09:55 +:
> With my operator hat on, I would like to use the etcd backend, as I'm already 
> paying the cost of maintaining etcd clusters as part of Kubernetes. Adding 
> Zookeeper is a lot more work.
> 

It would probably be a good idea to put etcd in as a plugin and make
sure it is tested in the gate. IIRC nobody has requested the one thing
ZK can do that none of the others can (fair locking), but it's likely
there are bugs in both drivers since IIRC neither are enabled in any
gate.



Re: [openstack-dev] [all] Small steps for Go

2017-03-13 Thread Steve Gordon
- Original Message -
> From: "Steve Gordon" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Monday, March 13, 2017 8:11:34 PM
> Subject: Re: [openstack-dev] [all] Small steps for Go
> 
> - Original Message -
> > From: "Clint Byrum" 
> > To: "openstack-dev" 
> > Sent: Monday, March 13, 2017 1:44:19 PM
> > Subject: Re: [openstack-dev] [all] Small steps for Go
> > 
> > Excerpts from Davanum Srinivas's message of 2017-03-13 10:06:30 -0400:
> > > Update:
> > > 
> > > * We have a new git repo (EMPTY!) for the commons work -
> > > http://git.openstack.org/cgit/openstack/golang-commons/
> > > * The golang-client has little code, but a lot of potential -
> > > https://git.openstack.org/cgit/openstack/golang-client/
> > > 
> > 
> > So, we're going to pretend gophercloud doesn't exist and continue to
> > isolate ourselves from every other community?
> 
> I'd add that gophercloud [1] is what the Kubernetes cloud provider framework
> implementation for OpenStack [2] uses to talk to the underlying cloud*. This
> would seem like a pretty good area for collaboration with other communities
> to expand on what is there rather than start over?
> 
> -Steve
> 
> [1] https://github.com/gophercloud/gophercloud
> [2]
> https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/openstack

Never mind, I see this train of thought also made its way to the etherpad...

-Steve



Re: [openstack-dev] [all] Small steps for Go

2017-03-13 Thread Steve Gordon
- Original Message -
> From: "Clint Byrum" 
> To: "openstack-dev" 
> Sent: Monday, March 13, 2017 1:44:19 PM
> Subject: Re: [openstack-dev] [all] Small steps for Go
> 
> Excerpts from Davanum Srinivas's message of 2017-03-13 10:06:30 -0400:
> > Update:
> > 
> > * We have a new git repo (EMPTY!) for the commons work -
> > http://git.openstack.org/cgit/openstack/golang-commons/
> > * The golang-client has little code, but a lot of potential -
> > https://git.openstack.org/cgit/openstack/golang-client/
> > 
> 
> So, we're going to pretend gophercloud doesn't exist and continue to
> isolate ourselves from every other community?

I'd add that gophercloud [1] is what the Kubernetes cloud provider framework 
implementation for OpenStack [2] uses to talk to the underlying cloud*. This 
would seem like a pretty good area for collaboration with other communities to 
expand on what is there rather than start over?

-Steve

[1] https://github.com/gophercloud/gophercloud
[2] 
https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/openstack



Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-13 Thread Fox, Kevin M
With my operator hat on, I would like to use the etcd backend, as I'm already 
paying the cost of maintaining etcd clusters as part of Kubernetes. Adding 
Zookeeper is a lot more work.

Thanks,
Kevin

From: Davanum Srinivas [dava...@gmail.com]
Sent: Monday, March 13, 2017 4:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

Folks,

Currently devstack defaults to zookeeper and is opinionated about it.
Talking to Josh, etcd seems to be a good option too.

Does anyone have specific experience with etcd with tooz?
Do we want to make DLM mandatory? (always run with tooz + backend)?

Asking because lots of the container-related projects are using etcd
[1], so we may want to avoid running both zookeeper and etcd?

Thoughts please. Note that this came up during the Atlanta PTG as well [2]

Thanks,
Dims

PS: the dragonflow one looks really good! [3]

[1] http://codesearch.openstack.org/?q=etcd=nope=devstack%2F.*=
[2] https://etherpad.openstack.org/p/ptg-architecture-workgroup
[3] http://git.openstack.org/cgit/openstack/dragonflow/tree/devstack/etcd_driver
--
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-13 Thread Davanum Srinivas
Folks,

Currently devstack defaults to zookeeper and is opinionated about it.
Talking to Josh, etcd seems to be a good option too.

Does anyone have specific experience with etcd with tooz?
Do we want to make DLM mandatory? (always run with tooz + backend)?
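(For context, a minimal tooz sketch. The backend is chosen purely by the
coordinator URL, so ZooKeeper vs. etcd is a deployment-time decision; the
endpoint below is an assumed local etcd:)

# Minimal tooz usage sketch; swap the URL for zookeeper://... to change backend.
from tooz import coordination

coord = coordination.get_coordinator('etcd://127.0.0.1:2379', b'worker-1')
coord.start()

with coord.get_lock(b'my-dlm-lock'):  # tooz locks are context managers
    pass  # critical section

coord.stop()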

Asking because lots of the container-related projects are using etcd
[1], so we may want to avoid running both zookeeper and etcd?

Thoughts please. Note that this came up during the Atlanta PTG as well [2]

Thanks,
Dims

PS: the dragonflow one looks really good! [3]

[1] http://codesearch.openstack.org/?q=etcd=nope=devstack%2F.*=
[2] https://etherpad.openstack.org/p/ptg-architecture-workgroup
[3] http://git.openstack.org/cgit/openstack/dragonflow/tree/devstack/etcd_driver
-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [Horizon][searchlight] Sharing resource type implementations

2017-03-13 Thread Richard Jones
I'll definitely be looking at getting a searchlight-ui patch up for
the mirror side of my Horizon patch.

Double registration largely depends on which particular aspect of the
resource type is being looked at. Most of the resource type
registration will just be replaced (with identical information), but
the kicker will be table columns and actions, which are added by appending
(via the extensible service), so they'll all be duplicated if both
registrations run. So ideally both searchlight-ui and Horizon would be
updated at the same time.


 Richard

On 11 March 2017 at 04:34, Tripp, Travis S  wrote:
> Hi Richard,
>
> I’m headed out for vacation so won’t be able to look through it until I get 
> back.  However, can you also please get an install of searchlight-ui running 
> so that you can see if anything breaks?  I know you don’t typically use 
> devstack, but the searchlight devstack plugin installs searchlight UI. [0]
>
> The one thing I’m not sure about is how the resource registry handles 
> potential double registrations.  So, if the resource is registered in both 
> code bases, I don't know what would get loaded.
>
> https://review.openstack.org/#/c/444095/2/openstack_dashboard/static/app/core/instances/instances.module.js
> https://github.com/openstack/searchlight-ui/blob/master/searchlight_ui/static/resources/os-nova-servers/os-nova-servers.module.js#L57
>
> [0] https://github.com/openstack/searchlight/tree/master/devstack
>
> Thanks,
> Travis
>
> On 3/9/17, 10:58 PM, "Richard Jones"  wrote:
>
> Thanks, Steve!
>
> I've put together an initial patch
> https://review.openstack.org/#/c/444095/ which pulls in the
> os-nova-servers module and a little extra to make it work in Horizon's
> codebase. I've tried to make minimal edits to the actual code -
> predominantly just editing module names. I've tested it and it mostly
> works on Horizon's side \o/
>
>
>  Richard
>
> On 10 March 2017 at 14:40, McLellan, Steven  
> wrote:
> > My expertise in this area is deeply suspect but as long as we maintain 
> the
> > mapping from the resource type names that searchlight uses 
> (os-nova-servers)
> > to the modules we'll be OK. If you or Rob put a patch up against 
> horizon I
> > (or a willing victim/volunteer) can test a searchlight-ui patch against 
> it.
> >
> >
> >  Original message 
> > From: Richard Jones 
> > Date: 3/9/17 21:13 (GMT-06:00)
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: Re: [openstack-dev] [Horizon][searchlight] Sharing resource 
> type
> > implementations
> >
> > Hey folks,
> >
> > Another potential issue is that the searchlight module structure and
> > Horizon's module structure are different in a couple of respects. I
> > could just retain the module structure from searchlight
> > ('resources.os-nova-servers') or, preferably, I could rename those
> > modules to match the Horizon structure more closely
> > ('horizon.app.resources.os-nova-servers') or more strictly
> > ('horizon.app.core.instances').
> >
> > As far as I can tell none of the module names are referenced directly
> > outside of the module (apart from resources.module.js of course) so
> > moving the modules shouldn't affect any existing usage in searchlight
> > ui.
> >
> > We could bikeshed this for ages, so if I could just get Rob and Steve
> > to wrestle over it or something, that'd be good. Rob's pretty scrappy.
> >
> >
> >   Richard
> >
> >
> > On 10 March 2017 at 09:56, Richard Jones  wrote:
> >> OK, I will work on a plan that migrates the code into Horizon, thanks
> >> everyone!
> >>
> >> Travis, can the searchlight details page stuff be done through
> >> extending the base resource type in Horizon? If not, is that perhaps a
> >> limitation of the extensible service?
> >>
> >>
> >>  Richard
> >>
> >>
> >> On 10 March 2017 at 02:20, McLellan, Steven 
> >> wrote:
> >>> I concur; option 4 is the only one that makes sense to me and was what was
> >>> intended originally. As long as we can do it in one fell swoop in one cycle
> >>> (preferably sooner rather than later) there should be no issues.
> >>> (preferably sooner than later) there should be no issues.
> >>>
> >>>
> >>>
> >>>
> >>> On 3/9/17, 8:35 AM, "Tripp, Travis S"  wrote:
> >>>
> Let me get Matt B in on this discussion, but basically, option 4 is my
>  initial feeling as Rob stated.
> 
> One downside we saw with this approach is that we weren’t going to be
>  able to take advantage of searchlight capabilities in details pages 
> if
>  everything was in native 

[openstack-dev] [openstack-docs] [tripleo] Creating official Deployment guide for TripleO

2017-03-13 Thread Emilien Macchi
Team,

[adding Alexandra, OpenStack Docs PTL]

It seems like there is a common interest in pushing deployment guides
for different OpenStack Deployment projects: OSA, Kolla.
The landing page is here:
https://docs.openstack.org/project-deploy-guide/newton/

And one example:
https://docs.openstack.org/project-deploy-guide/openstack-ansible/newton/

I think this is pretty awesome; it would bring more visibility to the
TripleO project, and help our community find TripleO documentation
in a consistent place.

The good news is that the openstack-docs team built a pretty solid
workflow to make that happen:
https://docs.openstack.org/contributor-guide/project-deploy-guide.html
And we don't need to create new repos or make any crazy changes. It
would probably just be some refactoring and Sphinx work.

Alexandra, please add any words if I missed something obvious.

Feedback from the team would be welcome here before we start any work.

Thanks!
-- 
Emilien Macchi



Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from LBaaS v1 to v2

2017-03-13 Thread Michael Johnson
I can confirm that the 1.0.0 release of neutron-lbaas-dashboard is working
on stable/newton.
I included my installation steps in the below linked bug.

As mentioned in the first e-mail, the instructions say to install only one
of the two files in the enabled directory. I suspect that is the issue you
are seeing.

Michael


-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Monday, March 13, 2017 5:47 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from
LBaaS v1 to v2

On 10/03/17 17:49, Michael Johnson wrote:
> Yes, folks have recently deployed the dashboard with success.  I think 
> you had that discussion on the IRC channel, so I won't repeat it here.
> 
> Please note, the neutron-lbaas-dashboard does not support LBaaS v1, 
> you must have LBaaS v2 deployed for the neutron-lbaas-dashboard to 
> work.  If you are trying to use LBaaS v1, you can use the legacy 
> panels included in the older versions of horizon.
> 
[..CUT..]
> If you think there is an open bug for the dashboard, please report it 
> in https://bugs.launchpad.net/neutron-lbaas-dashboard



Hello,
I updated the bug
https://bugs.launchpad.net/neutron-lbaas-dashboard/+bug/1621403

Can anyone clarify the version matrix to use between the horizon version and
the neutron-lbaas-dashboard panel versions?

Can anyone confirm whether both files,
_1481_project_ng_loadbalancersv2_panel.py and
_1480_project_loadbalancersv2_panel.py, need to be installed?

Is it okay to use the master branch of neutron-lbaas-dashboard with horizon
stable/newton?

thank you

Saverio


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-13 Thread Zane Bitter

On 13/03/17 18:16, Jay Pipes wrote:

Who, specifically, are these gatekeepers of which you speak?


In this case, the Nova & Keystone core teams.

- ZB



Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-13 Thread Jay Pipes

On 03/13/2017 05:10 PM, Zane Bitter wrote:

On 12/03/17 11:30, Clint Byrum wrote:

Excerpts from Fox, Kevin M's message of 2017-03-11 21:31:40 +:

No, they are treated as second class citizens. Take Trove again as an
example. The underlying OpenStack infrastructure does not provide a
good security solution for Trove's use case, as it's more than just
IaaS. So they have spent years trying to work around it in one way or
another, each with horrible trade-offs.

For example they could fix an issue by:
1. Run the service VM in the user's tenant where it belongs. Though,
currently the user has permissions to reboot the VM, log in through
the console and swipe any secrets that are on the VM, and make it much
harder for the cloud admin to administer.
2. Run the VM in a "trove" tenant. This fixes the security issue but
breaks the quota model of OpenStack. Users with special host
aggregate access/flavors can't work with this model.

For our site, we can't use Trove at all at the moment, even though we
want to. Because option 2 doesn't work for us, and option 1 currently
has a glaring security flaw in it.

One of the ways I saw Trove try to fix it was to get a feature into
Nova called "Service VMs": VMs owned by the user but not fully
controllable by them, but rather by some other OpenStack service on
their behalf. This, IMO, is the right way to solve it. There are a lot
of advanced services that need this functionality. But it seems to have
been rejected, as "users don't need that"... Which is true, only if
you only consider the IaaS use case.



You're right. This type of rejection is not O-K IMO, because this is
consumers of Nova with a real use case, asking for real features that
simply cannot be implemented anywhere except inside Nova. Perhaps the
climate has changed, and this effort can be resurrected.


I don't believe the climate has changed; there's no reason for it to have.
Nova is still constrained by the size of the core reviewers team, and
they've been unwilling or unable to take steps (like splitting Nova up
into smaller chunks) that would increase capacity, so they have to
reject as many feature requests as possible. Given that the wider
community has never had a debate about what we're trying to build or for
whom, it's perfectly easy to drift along thinking that the current
priorities are adequate without ever being challenged.

Until we have a TC resolution - with the community consensus to back it
up - that says "the reason for having APIs to your infrastructure is so
that *applications* can use them and projects must make not being an
obstacle to this their highest purpose", or "we're building an open
source AWS, not a free VMWare", or
https://www.youtube.com/watch?v=Vhh_GeBPOhs ... until it's not possible
to say with complete earnestness "OpenStack has VMs, so you can run any
application on it" then the climate will never change, and we'll just
keep hearing "I don't need this, so neither should you".


The problems of these other OpenStack services are being rejected as
second class problems, not primary ones.

I'm sure other sites are avoiding other OpenStack advanced services
for similar reasons. It's not just that Operators don't want to deploy
it, or that Users are not asking for it.

Let me try and explain Zane's post in a slightly different way...
maybe that would help...

So, say you had an operating system. It had the ability to run
arbitrary programs if the user started an executable via the
keyboard/mouse. But had no ability for an executable to start another
executable. How useful would that OS be? There would be no shell
scripts. No non monolithic applications. It would be sort of
functional, but would be hamstrung.

OpenStack is like that today. Like the DOS operating system. Programs
are expected to be pretty self-contained, with no real way to talk back
to the Operating System they're running on, nor a way to discover other
programs running on the same system. Nor, really, for a script running
on the Operating System to start other programs, chaining them
together in a way that's more useful than the sum of their parts. The
current view is fine if all you need is just a container to run a
traditional OS in. It's not if you are trying to build an application
that spans things.

There have been several attempts at fixing this, in Heat, in Murano,
in the App Catalog, but the plumbing they rely on isn't really
supportive of it, as they believe the use case really is just launching
a VM with an OS in it, and then the job's done.

For the Applications Catalog to be successful, it needs the
underlying cloud to have enough functionality among a common set of
cloud-provided services to allow application developers to write
cloud software that is redistributable and consumable by the end
user. It's failed because the infrastructure just isn't there. The
other advanced services are suffering from it too.



I'm not sure I agree. One can very simply inject needed credentials
into a running VM 

Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-13 Thread Zane Bitter

On 10/03/17 21:34, Clint Byrum wrote:

(BTW I proposed a workaround for Magnum/Kuryr at the PTG by using a
pre-signed Zaqar URL with a subscription triggering a Mistral workflow,
and I've started working on a POC.)


What triggers the boot to kick over the bucket full of golf balls
though?


It's inside the workflow definition but seriously, I'll be glad when 
people stop making these kinds of comments.


I'm the first person to say that ideally I think we should only need to 
involve Zaqar when the cloud needs to talk to the application, and that 
the application should have the credentials it needs to talk to the 
cloud directly using its APIs.


Notwithstanding that, every application is a special snowflake and there are 
plenty of reasons for applications to want to plug in their own logic 
between cloud services. Mistral is the obvious way to do that. It'd be 
nice to think we could get to a point of talking about two OpenStack 
services integrating with each other using the exact same public APIs 
that they provide to applications and people would just assume that it 
works (or raise bugs) instead of breaking out the Rube Goldberg jokes.


FWIW in AWS it's routine for SNS notifications of events in the cloud to 
trigger Lambda functions that can then make API calls: 
http://docs.aws.amazon.com/lambda/latest/dg/invoking-lambda-function.html#supported-event-source-sns


- ZB



[openstack-dev] [horizon] Weekly meeting

2017-03-13 Thread Rob Cresswell
Hey everyone,

Reminder that the weekly IRC meeting is 2000 UTC on Wednesday in 
#openstack-meeting-3. Anyone is welcome to add to the agenda at 
https://wiki.openstack.org/wiki/Meetings/Horizon

Previous logs, ICS files, and other info can be found at 
http://eavesdrop.openstack.org/#Horizon_Team_Meeting

Rob


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-13 Thread Davanum Srinivas
Zane,

Sorry for the top post. Can you please submit a TC resolution? I can
help with it as well. Let's test the waters.

Thanks,
Dims

On Mon, Mar 13, 2017 at 5:10 PM, Zane Bitter  wrote:
> On 12/03/17 11:30, Clint Byrum wrote:
>>
>> Excerpts from Fox, Kevin M's message of 2017-03-11 21:31:40 +:
>>>
>>> No, they are treated as second class citizens. Take Trove again as an
>>> example. The underlying OpenStack infrastructure does not provide a good
>>> security solution for Trove's use case, as it's more than just IaaS. So they
>>> have spent years trying to work around it in one way or another, each with
>>> horrible trade-offs.
>>>
>>> For example they could fix an issue by:
>>> 1. Run the service VM in the user's tenant where it belongs. Though,
>>> currently the user has permissions to reboot the VM, log in through the
>>> console and swipe any secrets that are on the VM, and make it much harder for
>>> the cloud admin to administer.
>>> 2. Run the VM in a "trove" tenant. This fixes the security issue but
>>> breaks the quota model of OpenStack. Users with special host aggregate
>>> access/flavors can't work with this model.
>>>
>>> For our site, we can't use Trove at all at the moment, even though we
>>> want to. Because option 2 doesn't work for us, and option 1 currently has a
>>> glaring security flaw in it.
>>>
>>> One of the ways I saw Trove try to fix it was to get a feature into Nova
>>> called "Service VMs": VMs owned by the user but not fully controllable by
>>> them, but rather by some other OpenStack service on their behalf. This, IMO,
>>> is the right way to solve it. There are a lot of advanced services that need
>>> this functionality. But it seems to have been rejected, as "users don't need
>>> that"... Which is true, only if you only consider the IaaS use case.
>>>
>>
>> You're right. This type of rejection is not O-K IMO, because this is
>> consumers of Nova with a real use case, asking for real features that
>> simply cannot be implemented anywhere except inside Nova. Perhaps the
>> climate has changed, and this effort can be resurrected.
>
>
> I don't believe the climate has changed; there's no reason for it to have. Nova
> is still constrained by the size of the core reviewers team, and they've
> been unwilling or unable to take steps (like splitting Nova up into smaller
> chunks) that would increase capacity, so they have to reject as many feature
> requests as possible. Given that the wider community has never had a debate
> about what we're trying to build or for whom, it's perfectly easy to drift
> along thinking that the current priorities are adequate without ever being
> challenged.
>
> Until we have a TC resolution - with the community consensus to back it up -
> that says "the reason for having APIs to your infrastructure is so that
> *applications* can use them and projects must make not being an obstacle to
> this their highest purpose", or "we're building an open source AWS, not a
> free VMWare", or https://www.youtube.com/watch?v=Vhh_GeBPOhs ... until it's
> not possible to say with complete earnestness "OpenStack has VMs, so you can
> run any application on it" then the climate will never change, and we'll
> just keep hearing "I don't need this, so neither should you".
>
>>> The problems of these other OpenStack services are being rejected as
>>> second class problems, not primary ones.
>>>
>>> I'm sure other sites are avoiding other OpenStack advanced services for
>>> similar reasons. It's not just that Operators don't want to deploy it, or
>>> that Users are not asking for it.
>>>
>>> Let me try and explain Zane's post in a slightly different way... maybe
>>> that would help...
>>>
>>> So, say you had an operating system. It had the ability to run arbitrary
>>> programs if the user started an executable via the keyboard/mouse. But had
>>> no ability for an executable to start another executable. How useful would
>>> that OS be? There would be no shell scripts. No non monolithic applications.
>>> It would be sort of functional, but would be hamstrung.
>>>
>>> OpenStack is like that today. Like the DOS operating system. Programs are
>>> expected to be pretty self-contained, with no real way to talk back to the
>>> Operating System they're running on, nor a way to discover other programs
>>> running on the same system. Nor, really, for a script running on the Operating
>>> System to start other programs, chaining them together in a way that's more
>>> useful than the sum of their parts. The current view is fine if all you need is
>>> just a container to run a traditional OS in. It's not if you are trying to
>>> build an application that spans things.
>>>
>>> There have been several attempts at fixing this, in Heat, in Murano, in
>>> the App Catalog, but the plumbing they rely on isn't really supportive of it,
>>> as they believe the use case really is just launching a VM with an OS in it,
>>> and then the job's done.
>>>
>>> For the Applications 

Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-13 Thread Ken'ichi Ohmichi
2017-03-13 12:23 GMT-07:00 Jim Rollenhagen :
> On Mon, Mar 13, 2017 at 12:58 PM, Chris Friesen
>  wrote:
>>
>> On 03/10/2017 01:37 PM, John Griffith wrote:
>>
>>> Now that micro-versions are *the API versioning scheme to rule them all*
>>> one
>>> question I've not been able to find an answer for is what we're going to
>>> promise
>>> here for support and testing.  My understanding thus far is that the
>>> "community"
>>> approach here is "nothing is ever deprecated, and everything is supported
>>> forever".
>>
>>
>> Nova has so far taken this approach, but there has been talk of bumping
>> the minimum required microversion at every dev gathering.  It hasn't
>> happened yet, but if the support costs of maintaining the compat code
>> become too high then it could happen.
>
>
> Indeed. We discussed this at the PTG a bit[0], and plan to use ironic as an
> experiment for this. It's an admin-only API, so the API users should be the
> same as (or in contact with) the folks deploying it, and so it shouldn't be as
> surprising. We hope to get some feedback and find out if doing this is as
> terrible as we keep saying.

That seems like a nice plan.
The effective scope of bumping the minimum microversion could be small, and
it would be easier to get feedback from administrators compared with
normal API consumers.

Thanks



Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-13 Thread Zane Bitter

On 12/03/17 11:30, Clint Byrum wrote:

Excerpts from Fox, Kevin M's message of 2017-03-11 21:31:40 +:

No, they are treated as second class citizens. Take Trove again as an example. 
The underlying OpenStack infrastructure does not provide a good security 
solution for Trove's use case, as it's more than just IaaS. So they have spent 
years trying to work around it in one way or another, each with horrible 
trade-offs.

For example they could fix an issue by:
1. Run the service VM in the user's tenant where it belongs. Though, currently 
the user has permissions to reboot the VM, log in through the console and swipe 
any secrets that are on the VM, and make it much harder for the cloud admin to 
administer.
2. Run the VM in a "trove" tenant. This fixes the security issue but breaks the 
quota model of OpenStack. Users with special host aggregate access/flavors can't 
work with this model.

For our site, we can't use Trove at all at the moment, even though we want to. 
Because option 2 doesn't work for us, and option 1 currently has a glaring 
security flaw in it.

One of the ways I saw Trove try to fix it was to get a feature into Nova called 
"Service VMs": VMs owned by the user but not fully controllable by them, but 
rather by some other OpenStack service on their behalf. This, IMO, is the right 
way to solve it. There are a lot of advanced services that need this 
functionality. But it seems to have been rejected, as "users don't need 
that"... Which is true, only if you only consider the IaaS use case.



You're right. This type of rejection is not O-K IMO, because this is
consumers of Nova with a real use case, asking for real features that
simply cannot be implemented anywhere except inside Nova. Perhaps the
climate has changed, and this effort can be resurrected.


I don't believe the climate has changed; there's no reason for it to have. 
Nova is still constrained by the size of the core reviewers team, and 
they've been unwilling or unable to take steps (like splitting Nova up 
into smaller chunks) that would increase capacity, so they have to 
reject as many feature requests as possible. Given that the wider 
community has never had a debate about what we're trying to build or for 
whom, it's perfectly easy to drift along thinking that the current 
priorities are adequate without ever being challenged.


Until we have a TC resolution - with the community consensus to back it 
up - that says "the reason for having APIs to your infrastructure is so 
that *applications* can use them and projects must make not being an 
obstacle to this their highest purpose", or "we're building an open 
source AWS, not a free VMWare", or 
https://www.youtube.com/watch?v=Vhh_GeBPOhs ... until it's not possible 
to say with complete earnestness "OpenStack has VMs, so you can run any 
application on it" then the climate will never change, and we'll just 
keep hearing "I don't need this, so neither should you".



The problems of these other OpenStack services are being rejected as second 
class problems, not primary ones.

I'm sure other sites are avoiding other OpenStack advanced services for similar 
reasons. It's not just that Operators don't want to deploy it, or that Users are 
not asking for it.

Let me try and explain Zane's post in a slightly different way... maybe that 
would help...

So, say you had an operating system. It had the ability to run arbitrary 
programs if the user started an executable via the keyboard/mouse. But had no 
ability for an executable to start another executable. How useful would that OS 
be? There would be no shell scripts. No non monolithic applications. It would 
be sort of functional, but would be hamstrung.

OpenStack is like that today. Like the DOS operating system. Programs are 
expected to be pretty self-contained, with no real way to talk back to the 
Operating System they're running on, nor a way to discover other programs 
running on the same system. Nor, really, for a script running on the Operating 
System to start other programs, chaining them together in a way that's more 
useful than the sum of their parts. The current view is fine if all you need is 
just a container to run a traditional OS in. It's not if you are trying to 
build an application that spans things.

There have been several attempts at fixing this, in Heat, in Murano, in the App 
Catalog, but the plumbing they rely on isn't really supportive of it, as they 
believe the use case really is just launching a VM with an OS in it, and then 
the job's done.

For the Applications Catalog to be successful, it needs the underlying cloud to 
have enough functionality among a common set of cloud-provided services to 
allow application developers to write cloud software that is redistributable 
and consumable by the end user. It's failed because the infrastructure just 
isn't there. The other advanced services are suffering from it too.



I'm not sure I agree. One can very simply inject needed credentials
into a 

Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-13 Thread Jiri Tomasek
Hi, I agree that this new updated logo is a great refinement of what we
currently have. Love it.

Thanks Heidi

On Mon, Mar 13, 2017 at 8:24 PM, Dan Prince  wrote:

> Hi Heidi,
>
> I like this one a good bit better. He might look a smidge cross-eyed
> to me... but I'd take this one any day over the previous version.
>
> Thanks for trying to capture the spirit of the original logos.
>
> Dan
>
> On Fri, 2017-03-10 at 08:26 -0800, Heidi Joy Tretheway wrote:
> > Hi TripleO team,
> >
> > Here’s an update on your project logo. Our illustrator tried to be as
> > true as possible to your original, while ensuring it matched the line
> > weight, color palette and style of the rest. We also worked to make
> > sure that three Os in the logo are preserved. Thanks for your
> > patience as we worked on this! Feel free to direct feedback to me.
> >


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-13 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2017-03-13 15:12:42 -0400:
> Excerpts from Farr, Kaitlin M.'s message of 2017-03-13 18:55:18 +:
> > Proposed library name: Rename Castellan to oslo.keymanager
> > 
> > Proposed library mission/motivation: Castellan's goal is to provide a
> > generic key manager interface that projects can use for their key
> > manager needs, e.g., storing certificates or generating keys for
> > encrypting data.  The interface passes the commands and Keystone
> > credentials on to the configured back end. Castellan is not a service
> > and does not maintain state. The library can grow to have multiple
> > back ends, as long as the back end can authenticate Keystone
> > credentials.  The only two back end options now in Castellan are
> > Barbican and a limited mock key manager useful only for unit tests.
> > If someone wrote a Keystone auth plugin for Vault, we could also have a
> > Vault back end for Castellan.
> > 
> > The benefit of using Castellan versus using Barbican directly
> > is Castellan allows the option of swapping out for other key managers,
> > mainly for testing.  If projects want their own custom back end for
> > Castellan, they can write a back end that implements the Castellan
> > interface but lives in their own code base, i.e., ConfKeyManager in
> > Nova and Cinder. Additionally, Castellan already has oslo.config
> > options defined which are helpful for configuring the project to talk
> > to Barbican.
> > 
> > When the Barbican team first created the Castellan library, we had
> > reached out to oslo to see if we could name it oslo.keymanager, but the
> > idea was not accepted because the library didn't have enough traction.
> > Now, Castellan is used in many projects, and we thought we would
> > suggest renaming again.  At the PTG, the Barbican team met with the AWG
> > to discuss how we could get Barbican integrated with more projects, and
> > the rename was also suggested at that meeting.  Other projects are
> > interested in creating encryption features, and a rename will help
> > clarify the difference between Barbican and Castellan.
> 
> Can you expand on why you think that is so? I'm not disagreeing with the
> statement, but it's not obviously true to me, either. I vaguely remember
> having it explained at the PTG, but I don't remember the details.
> 

To me, Oslo is a bunch of libraries that encompass "the way OpenStack
does X". When X is key management, projects are, AFAICT, universally
using Castellan at the moment. So I think it fits in Oslo conceptually.

As far as what benefit there is to renaming it, the biggest one is
divesting Castellan of the controversy around Barbican. There's no
disagreement that explicitly handling key management is necessary. There
is, however, still hesitance to fully adopt Barbican in that role. In
fact I heard about some alternatives to Barbican, namely "Vault"[1] and
"Tang"[2], that may be useful for subsets of the community, or could
even grow into de facto standards for key management.

So, given that there may be other backends, and the developers would
like to embrace that, I see value in renaming. It would help, I think,
Castellan's developers to be able to focus on key management and not
have to explain to every potential user "no, we're not Barbican's cousin,
we're just an abstraction...".

> > Existing similar libraries (if any) and why they aren't being used: N/A
> > 
> > Reviewer activity: Barbican team
> 
> If the review team is going to be largely the same, I'm not sure I
> see the benefit of changing the ownership of the library. We certainly
> have other examples of Oslo libraries being managed mainly by
> sub-teams made up of folks who primarily focus on other projects.
> oslo.policy and oslo.versionedobjects come to mind, but in both of
> those cases the code was incubated in Oslo or brought into Oslo
> before the tools for managing shared libraries were widely used
> outside of the Oslo team. We now have quite a few examples of project
> teams managing shared libraries (other than their clients).
> 

While this makes sense, I'm not so sure any of those are actually
specifically in the same category as Castellan. Perhaps you can expand
on which libraries have done this, and how they're similar to Castellan?

[1] https://www.vaultproject.io/
[2] https://github.com/latchset/tang

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Where to find list of Newton Murano bugs, and post-Newton commits

2017-03-13 Thread MONTEIRO, FELIPE C
Hi Greg,

You can obtain a list of post-Newton enhancements to Murano here: 
https://docs.openstack.org/releasenotes/murano/

However, Murano doesn’t actively maintain a list of Newton bugs upstream; this 
is a general upstream policy. Instead, fixes for bugs identified upstream are 
backported as necessary into earlier releases, like Newton, if applicable. 
Nevertheless, you might find the following query to be useful in finding what 
you’re looking for: 
https://review.openstack.org/#/q/project:openstack/murano+branch:stable/newton+OR+project:openstack/murano+branch:stable/ocata

As for Pike bug fixes, enhancements, etc., see: 
https://review.openstack.org/#/q/project:openstack/murano+branch:master+status:merged

Regards,

Felipe

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: Sunday, March 12, 2017 6:57 PM
To: OpenStack Development Mailing List (not for usage questions) 

Cc: Sun, Yicheng (Jerry) 
Subject: [openstack-dev] [Murano] Where to find list of Newton Murano bugs, and 
post-Newton commits

We have integrated a NEWTON-version of Murano into our OpenStack product
and are beginning to do some testing of Murano.

Where can I find a current list of bugs that exist in NEWTON-version of MURANO ?
AND
Where can I find a list of post-NEWTON commits to MURANO ?  (i.e. bug fixes, 
enhancements, … towards ocata or pike)

Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from LBaaS v1 to v2

2017-03-13 Thread Michael Johnson
Hi Saverio,

I did a fresh install today with master versions of both OpenStack and the
neutron-lbaas-dashboard just to make sure the panels are working as
expected.  It went fine.

https://usercontent.irccloud-cdn.com/file/4Zgl9SB3/

To answer your version question:
Stable/mitaka neutron-lbaas-dashboard should work with stable/mitaka and
stable/newton OpenStack
Stable/ocata neutron-lbaas-dashboard works with stable/ocata OpenStack

Per the instructions in the README.rst and on PyPi, ONLY install the
_1481_project_ng_loadbalancersv2_panel.py file.  Do not install both; it
will fail.

I am not sure if you can use the master branch of neutron-lbaas-dashboard
with a newton version of horizon.  This is not a combination we test and/or
support.  It may work.
Someone from the horizon team may have more insights on that, but I think
the best answer is to get it going with the known good combinations and then
to test mixed releases.

I will now start over on stable/newton and test it out.  I will let you know
if I find a problem.

Michael


-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Monday, March 13, 2017 5:47 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from
LBaaS v1 to v2

On 10/03/17 17:49, Michael Johnson wrote:
> Yes, folks have recently deployed the dashboard with success.  I think 
> you had that discussion on the IRC channel, so I won't repeat it here.
> 
> Please note, the neutron-lbaas-dashboard does not support LBaaS v1, 
> you must have LBaaS v2 deployed for the neutron-lbaas-dashboard to 
> work.  If you are trying to use LBaaS v1, you can use the legacy 
> panels included in the older versions of horizon.
> 
[..CUT..]
> If you think there is an open bug for the dashboard, please report it 
> in https://bugs.launchpad.net/neutron-lbaas-dashboard



Hello,
I updated the bug
https://bugs.launchpad.net/neutron-lbaas-dashboard/+bug/1621403

Can anyone clarify the version matrix to use between the horizon version and
the neutron-lbaas-dashboard panels versions ?

can anyone confirm that both files
_1481_project_ng_loadbalancersv2_panel.py and file
_1480_project_loadbalancersv2_panel.py need to be installed ?

Is it okay to use branch master of neutron-lbaas-dashboard with horizon
stable/newton ?

thank you

Saverio

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-03-13 Thread Jim Rollenhagen
On Fri, Mar 10, 2017 at 11:28 AM, Heidi Joy Tretheway <
heidi...@openstack.org> wrote:

> Hi Ironic team,
> Here’s an update on your project logo. Our illustrator tried to be as true
> as possible to your original, while ensuring it matched the line weight,
> color palette and style of the rest. Thanks for your patience as we worked
> on this! Feel free to direct feedback to me; we really want to get this
> right for you.
>

This is fantastic! Thank you for putting up with us, I think it turned out
well in the end.

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-13 Thread Dan Prince
Hi Heidi,

I like this one a good bit better. He might look a smidge cross-eyed
to me... but I'd take this one any day over the previous version.

Thanks for trying to capture the spirit of the original logos.

Dan

On Fri, 2017-03-10 at 08:26 -0800, Heidi Joy Tretheway wrote:
> Hi TripleO team, 
> 
> Here’s an update on your project logo. Our illustrator tried to be as
> true as possible to your original, while ensuring it matched the line
> weight, color palette and style of the rest. We also worked to make
> sure that three Os in the logo are preserved. Thanks for your
> patience as we worked on this! Feel free to direct feedback to me.
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-03-13 Thread Loo, Ruby
Hi,

We are meditative to present this week's priorities and subteam report for 
Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and 
formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. "standalone" (without nova) tests: https://review.openstack.org/#/c/423556/
2. get multi-node grenade job up and running.
2.1. Backports: https://review.openstack.org/#/c/444950/ & 
https://review.openstack.org/#/c/444944/ need to land
3. review/land next BFV patch: https://review.openstack.org/#/c/355625/
4. update/review IPA API versioning spec: 
https://review.openstack.org/#/c/341086/
5. redfish driver: https://review.openstack.org/#/c/438982/
6. review e-tags spec: https://review.openstack.org/#/c/381991/


Bugs (dtantsur, mjturek)

- Stats (diff between 06 Mar 2017 and 13 Mar 2017)
- Ironic: 236 bugs (+4) + 246 wishlist items (+3). 14 new, 196 in progress 
(+5), 0 critical, 29 high (+3) and 30 incomplete (-2)
- Inspector: 16 bugs + 28 wishlist items (+2). 3 new (+1), 15 in progress (-1), 
0 critical, 1 high and 4 incomplete
- Nova bugs with Ironic tag: 13. 2 new, 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- patch on review https://review.openstack.org/#/c/423556/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476

Generic boot-from-volume (TheJulia, dtantsur)
-
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- status as of most recent weekly meeting:
- Weekly meeting now established on Thursdays at 1600 UTC in 
#openstack-meeting-5
- http://eavesdrop.openstack.org/#Ironic_Boot_from_Volume_meeting
- Initial meeting was held last week between joanna and TheJulia. Basic 
context and information sharing/coordination.
- API side changes for volume connector information has a procedural -2 
until we can begin making use of the data in the conductor, but should still be 
reviewed
- https://review.openstack.org/#/c/214586/
- This change has been rebased on top of the iPXE template update 
revision to support cinder/iscsi booting.
- Boot from volume/storage cinder interface is up for review
- Patch series is in need of being updated, and validated against the 
data model for volume connection information. Base cinder interface was 
updated and validated against volume object usage in nova and cinder itself.
- 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1559691
- Original volume connection information client patches
- These changes should be expected to land once Pike opens.
- 
https://review.openstack.org/#/q/status:open+project:openstack/python-ironicclient+branch:master+topic:bug/1526231
- we cannot land these until we land Ironic API bits

Rolling upgrades and grenade-partial (rloo, jlvillal)
-
* trello: 
https://trello.com/c/GAlhSzLm/2-rolling-upgrades-and-grenade-with-multi-node
- status as of most recent weekly meeting:
- patch ready for review: https://review.openstack.org/#/c/407491/
- rest of patches will be updated this week: 
https://review.openstack.org/#/q/topic:bug/1526283
- Testing work:
- 13-Mar-2017: Multi-node + multi-tenant + grenade job is passing with 
patches
- We are almost done :) Hope to have it working before end-of-week
- Two backports need to be merged:
- https://review.openstack.org/444944 stable/ocata
- https://review.openstack.org/444950 stable/newton
- After backports are done need patch to openstack-infra/project-config 
to land:
- https://review.openstack.org/443348
- Will need to move job from experimental to non-voting

Reference architecture guide (jroll)

- have been hacking on a devstack setup to explore some of the kvm/ironic 
interactions

Driver composition (dtantsur, jroll)

* trello: https://trello.com/c/fTya14y6/14-driver-composition
- gerrit topic: https://review.openstack.org/#/q/status:open+topic:bug/1524745
- status as of most recent weekly meeting:
- TODO as of 6 Mar 2017
- install guide / admin guide docs
- client changes:
- driver commands update: https://review.openstack.org/419274
- node-update update: https://review.openstack.org/#/c/431542/
- new hardware types:
- ilo: https://review.openstack.org/#/c/439404/
- contentious topics:
- what to do about driver properties API and dynamic drivers?
- rloo and dtantsur started brainstorming: 

Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-13 Thread Jim Rollenhagen
On Mon, Mar 13, 2017 at 12:58 PM, Chris Friesen  wrote:

> On 03/10/2017 01:37 PM, John Griffith wrote:
>
> Now that micro-versions are *the API versioning scheme to rule them all*
>> one
>> question I've not been able to find an answer for is what we're going to
>> promise
>> here for support and testing.  My understanding thus far is that the
>> "community"
>> approach here is "nothing is ever deprecated, and everything is supported
>> forever".
>>
>
> Nova has so far taken this approach, but there has been talk of bumping
> the minimum required microversion at every dev gathering.  It hasn't
> happened yet, but if the support costs of maintaining the compat code
> becomes too high then it could happen.
>

Indeed. We discussed this at the PTG a bit[0], and plan to use ironic as an
experiment for this. It's an admin-only API, so the API users should be the
same as (or in contact with) the folks deploying it, and so it shouldn't be as
surprising. We hope to get some feedback and find out if doing this is as
terrible as we keep saying.

// jim

[0] line 146 here:
https://etherpad.openstack.org/p/ptg-architecture-workgroup
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-13 Thread Davanum Srinivas
Kaitlin,

On Mon, Mar 13, 2017 at 2:55 PM, Farr, Kaitlin M.
 wrote:
> Proposed library name: Rename Castellan to oslo.keymanager
>
> Proposed library mission/motivation: Castellan's goal is to provide a
> generic key manager interface that projects can use for their key
> manager needs, e.g., storing certificates or generating keys for
> encrypting data.  The interface passes the commands and Keystone
> credentials on to the configured back end. Castellan is not a service
> and does not maintain state. The library can grow to have multiple
> back ends, as long as the back end can authenticate Keystone
> credentials.  The only two back end options now in Castellan are
> Barbican and a limited mock key manager useful only for unit tests.
> If someone wrote a Keystone auth plugin for Vault, we could also have a
> Vault back end for Castellan.
>
> The benefit of using Castellan versus using Barbican directly
> is Castellan allows the option of swapping out for other key managers,
> mainly for testing.  If projects want their own custom back end for
> Castellan, they can write a back end that implements the Castellan
> interface but lives in their own code base, i.e., ConfKeyManager in
> Nova and Cinder. Additionally, Castellan already has oslo.config
> options defined which are helpful for configuring the project to talk
> to Barbican.
>
> When the Barbican team first created the Castellan library, we had
> reached out to oslo to see if we could name it oslo.keymanager, but the
> idea was not accepted because the library didn't have enough traction.
> Now, Castellan is used in many projects, and we thought we would
> suggest renaming again.  At the PTG, the Barbican team met with the AWG
> to discuss how we could get Barbican integrated with more projects, and
> the rename was also suggested at that meeting.  Other projects are
> interested in creating encryption features, and a rename will help
> clarify the difference between Barbican and Castellan.
>
> Existing similar libraries (if any) and why they aren't being used: N/A
>
> Reviewer activity: Barbican team
>
> Who is going to use this (project involvement): Cinder, Nova, Sahara,
> and Glance already use Castellan, Swift has a patch that integrates
> Castellan.
>
> Proposed adoption model/plan: The Castellan library was already created
> and produces a functional and useful artifact (a pypi release) and is
> integrated into various OpenStack projects and now it is proposed that
> the library be moved into the Oslo group's namespace by creating a fork
> of Castellan, clean up a few things, create a new oslo.keymanager
> release on pypi, and change the projects to use oslo.keymanager.

Is the idea that the name change (oslo) will help drive the adoption?

Also, is the default backend for, say, devstack going to be Barbican?
Is there a plan to do something else (say, a Vault-based backend) for
very simple scenarios?

>
> Thanks,
>
> Kaitlin Farr
> Software Engineer
> The Johns Hopkins University Applied Physics Laboratory
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-13 Thread Doug Hellmann
Excerpts from Farr, Kaitlin M.'s message of 2017-03-13 18:55:18 +:
> Proposed library name: Rename Castellan to oslo.keymanager
> 
> Proposed library mission/motivation: Castellan's goal is to provide a
> generic key manager interface that projects can use for their key
> manager needs, e.g., storing certificates or generating keys for
> encrypting data.  The interface passes the commands and Keystone
> credentials on to the configured back end. Castellan is not a service
> and does not maintain state. The library can grow to have multiple
> back ends, as long as the back end can authenticate Keystone
> credentials.  The only two back end options now in Castellan are
> Barbican and a limited mock key manager useful only for unit tests.
> If someone wrote a Keystone auth plugin for Vault, we could also have a
> Vault back end for Castellan.
> 
> The benefit of using Castellan versus using Barbican directly
> is Castellan allows the option of swapping out for other key managers,
> mainly for testing.  If projects want their own custom back end for
> Castellan, they can write a back end that implements the Castellan
> interface but lives in their own code base, i.e., ConfKeyManager in
> Nova and Cinder. Additionally, Castellan already has oslo.config
> options defined which are helpful for configuring the project to talk
> to Barbican.
> 
> When the Barbican team first created the Castellan library, we had
> reached out to oslo to see if we could name it oslo.keymanager, but the
> idea was not accepted because the library didn't have enough traction.
> Now, Castellan is used in many projects, and we thought we would
> suggest renaming again.  At the PTG, the Barbican team met with the AWG
> to discuss how we could get Barbican integrated with more projects, and
> the rename was also suggested at that meeting.  Other projects are
> interested in creating encryption features, and a rename will help
> clarify the difference between Barbican and Castellan.

Can you expand on why you think that is so? I'm not disagreeing with the
statement, but it's not obviously true to me, either. I vaguely remember
having it explained at the PTG, but I don't remember the details.

> Existing similar libraries (if any) and why they aren't being used: N/A
> 
> Reviewer activity: Barbican team

If the review team is going to be largely the same, I'm not sure I
see the benefit of changing the ownership of the library. We certainly
have other examples of Oslo libraries being managed mainly by
sub-teams made up of folks who primarily focus on other projects.
oslo.policy and oslo.versionedobjects come to mind, but in both of
those cases the code was incubated in Oslo or brought into Oslo
before the tools for managing shared libraries were widely used
outside of the Oslo team. We now have quite a few examples of project
teams managing shared libraries (other than their clients).

> Who is going to use this (project involvement): Cinder, Nova, Sahara,
> and Glance already use Castellan, Swift has a patch that integrates
> Castellan.
> 
> Proposed adoption model/plan: The Castellan library was already created
> and produces a functional and useful artifact (a pypi release) and is
> integrated into various OpenStack projects and now it is proposed that
> the library be moved into the Oslo group's namespace by creating a fork
> of Castellan, clean up a few things, create a new oslo.keymanager
> release on pypi, and change the projects to use oslo.keymanager.
> 
> Thanks,
> 
> Kaitlin Farr
> Software Engineer
> The Johns Hopkins University Applied Physics Laboratory

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-13 Thread Farr, Kaitlin M.
Proposed library name: Rename Castellan to oslo.keymanager

Proposed library mission/motivation: Castellan's goal is to provide a
generic key manager interface that projects can use for their key
manager needs, e.g., storing certificates or generating keys for
encrypting data.  The interface passes the commands and Keystone
credentials on to the configured back end. Castellan is not a service
and does not maintain state. The library can grow to have multiple
back ends, as long as the back end can authenticate Keystone
credentials.  The only two back end options now in Castellan are
Barbican and a limited mock key manager useful only for unit tests.
If someone wrote a Keystone auth plugin for Vault, we could also have a
Vault back end for Castellan.

The benefit of using Castellan versus using Barbican directly
is Castellan allows the option of swapping out for other key managers,
mainly for testing.  If projects want their own custom back end for
Castellan, they can write a back end that implements the Castellan
interface but lives in their own code base, i.e., ConfKeyManager in
Nova and Cinder. Additionally, Castellan already has oslo.config
options defined which are helpful for configuring the project to talk
to Barbican.
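
For readers who haven't used the library, here is a minimal usage sketch
(based on Castellan's documented key_manager entry point; the request
context and back-end configuration are assumed to be set up elsewhere,
and the secret value is a made-up example):

from castellan.common.objects import passphrase
from castellan import key_manager
from oslo_context import context

# key_manager.API() instantiates whichever back end is configured
# (e.g. Barbican); callers code only against the generic interface.
manager = key_manager.API()

# In a real service this would be the caller's Keystone request context.
ctxt = context.RequestContext()

# Store a secret and read it back; the same calls work regardless of
# which back end is configured.
secret = passphrase.Passphrase('super-secret')
secret_id = manager.store(ctxt, secret)
retrieved = manager.get(ctxt, secret_id)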

When the Barbican team first created the Castellan library, we had
reached out to oslo to see if we could name it oslo.keymanager, but the
idea was not accepted because the library didn't have enough traction.
Now, Castellan is used in many projects, and we thought we would
suggest renaming again.  At the PTG, the Barbican team met with the AWG
to discuss how we could get Barbican integrated with more projects, and
the rename was also suggested at that meeting.  Other projects are
interested in creating encryption features, and a rename will help
clarify the difference between Barbican and Castellan.

Existing similar libraries (if any) and why they aren't being used: N/A

Reviewer activity: Barbican team

Who is going to use this (project involvement): Cinder, Nova, Sahara,
and Glance already use Castellan, Swift has a patch that integrates
Castellan.

Proposed adoption model/plan: The Castellan library was already created
and produces a functional and useful artifact (a pypi release) and is
integrated into various OpenStack projects and now it is proposed that
the library be moved into the Oslo group's namespace by creating a fork
of Castellan, clean up a few things, create a new oslo.keymanager
release on pypi, and change the projects to use oslo.keymanager.

Thanks,

Kaitlin Farr
Software Engineer
The Johns Hopkins University Applied Physics Laboratory
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Small steps for Go

2017-03-13 Thread Clint Byrum
Excerpts from Davanum Srinivas's message of 2017-03-13 14:32:16 -0400:
> Clint,
> 
> There's some discussion on the etherpad, don't want to move that here.
> 

Ok. I don't see much that explains why. But either way, IMO this is
why an etherpad is a really bad place to have a discussion. Great for
recording notes _during_ a discussion. But it's not like I have a log
of what was said when by who.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] - driver capability/feature verification framework

2017-03-13 Thread Robert Kukura

Hi Kevin,

I will file the RFE this week.

-Bob


On 3/13/17 2:05 PM, Kevin Benton wrote:

Hi,

At the PTG we briefly discussed a generic system for verifying that 
the appropriate drivers are enforcing a particular user-requested 
feature in ML2 (e.g. security groups, qos, etc).


Is someone planning on working on this for Pike? If so, can you please 
file an RFE so we can prioritize it appropriately? We have to decide 
if we are going to block features based on the enforcement by this 
framework.


Cheers,
Kevin Benton


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Small steps for Go

2017-03-13 Thread Davanum Srinivas
Clint,

There's some discussion on the etherpad, don't want to move that here.

-- Dims

On Mon, Mar 13, 2017 at 1:44 PM, Clint Byrum  wrote:
> Excerpts from Davanum Srinivas's message of 2017-03-13 10:06:30 -0400:
>> Update:
>>
>> * We have a new git repo (EMPTY!) for the commons work -
>> http://git.openstack.org/cgit/openstack/golang-commons/
>> * The golang-client has little code, but lot of potential -
>> https://git.openstack.org/cgit/openstack/golang-client/
>>
>
> So, we're going to pretend gophercloud doesn't exist and continue to
> isolate ourselves from every other community?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] propose Alex Schultz core on tripleo-heat-templates

2017-03-13 Thread John Trowbridge


On 03/13/2017 10:30 AM, Emilien Macchi wrote:
> Hi,
> 
> Alex is already core on instack-undercloud and puppet-tripleo.

+1 it is actually a bit odd to be +2 on puppet-tripleo without being +2
on THT, since so many changes span the two repos.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-13 Thread Clint Byrum
I know it happened a few messages up in the thread, but you may have
forgotten where I said that we should resurrect the idea of having
automatic instance-users as a feature of Nova (and even to add this to
Shade/Oaktree, so people could add it to existing OpenStack clouds
before they roll out the version that has it supported natively).

My only point there was that currently there is a workaround available,
and it wouldn't be that alien to most users of clouds.

Excerpts from Fox, Kevin M's message of 2017-03-13 14:31:10 +:
> So for cloud apps, you must have a config management system to do secure 
> secret transport? A cloud app developer must target which config management 
> tool? How do you know what the end user sites are running? Do you have to 
> support all of them? Config management tooling is very religious, so if you 
> don't pick the right one, folks will shun your app. That's way too burdensome 
> on app developers and users to require.
> 
> Instead, look at how aws does creds, and k8s. Any vm can just request a fresh 
> token. In k8s, a fresh token is injected into the container automatically. 
> This makes it extremely easy to deal with. This is what app developers are 
> expecting now and openstack is turning folks away when we keep pushing a lot 
> of work their way.
> 
> OpenStack's general solution for any hard problem seems to be: make the 
> operator, app dev, or user deal with it, not OpenStack.
> 
> That's ok if those folks have nowhere else to go. They do now, and are 
> starting to do so as there are more options.
> 
> OpenStack needs to abandon that philosophy. It can no longer afford it.
> 
> Thanks,
> Kevin
> 
> 
> From: Clint Byrum
> Sent: Sunday, March 12, 2017 10:30:49 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog
> 
> Excerpts from Fox, Kevin M's message of 2017-03-12 16:54:20 +:
> > I totally agree that policy management is a major problem too. A much 
> > bigger one than instance users, and something I was hoping to get to after 
> > instance users, but never made it past the easier of the two. :/
> >
> >
> > The just inject creds solution keeps getting proposed, but only looks at 
> > the surface of the issue so has a lot of issues under the hood. Let's dig in 
> > again.
> >
> > Let's say I create a heat template and inject creds through Parameters, as 
> > that is the natural place for a user to fill out settings and launch their 
> > application.
> >
> > The cred is loaded unencrypted into the heat database. Then heat-engine 
> > pushes it into the nova database where it resides unencrypted, so it can be 
> > sent to cloud init, usually also in an unencrypted form.
> >
> > You delete the heat stack, and the credential still sticks around in the 
> > nova database long after the vm is deleted, as it keeps deleted data.
> >
> > The channels for passing stuff to a vm are much better at passing config to 
> > the vm, not secrets.
> >
> > It's also a one-shot way to get an initial cred to a vm, but not a way to 
> > update it should the need arise. Also, how is the secret maintained in a 
> > way that rebooting the vm works while snapshotting the vm doesn't capture 
> > the secret, etc.
> >
> > The use case/issues are described exhaustively in the spec and describe why 
> > it's not something that can easily be tackled by "just do X" 
> > solutions. I proposed one implementation I think will work generally and 
> > cover all bases. But am open to other implementations that cover all the 
> > bases. Many half solutions have been proposed, but the whole point is 
> > security, so a half solution that has big security holes in it isn't really 
> > a solution.
> >
> 
> _OR_, you inject a nonce that is used to authenticate the instance to
> config management. If you're ever going to talk to anything outside of
> the cloud APIs, you'll need this anyway.
> 
> Once you're involved with config management you are already sending
> credentials of various types to your instances.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What do we want to be when we grow up?

2017-03-13 Thread Monty Taylor
On 03/10/2017 11:39 PM, Joshua Harlow wrote:
> 
> * Interoperability - kept as is (though I can't really say how many
> public clouds there are anymore to interoperate with).

There are plenty. As Dan Smith will tell you, I'm fond of telling people
just how many OpenStack Public Cloud accounts I have.

This:

https://docs.openstack.org/developer/os-client-config/vendor-support.html

Is a non-comprehensive list. I continue to find new ones I didn't know
about. I also haven't yet gotten an account on Teuto.net to verify it.

There is also this:

https://www.openstack.org/marketplace/public-clouds/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Containers] Containers work tracking, updates and schedule

2017-03-13 Thread Dan Prince
On Tue, 2017-03-07 at 12:08 +0100, Flavio Percoco wrote:
> Greetings,
> 
> We've been organizing the containers work for TripleO in a more
> consumable way
> that will hopefully ease the engagement from different squads and
> teams in
> OpenStack.

For reference: as a lot of the initial services were completed as part
of the undercloud, we had been tracking that upstream already here as
well:

https://etherpad.openstack.org/p/tripleo-composable-containers-undercloud

A lot of the underpinnings that enable our docker approach with t-h-t
were in that slightly older etherpad revision.

Dan

> 
> The result of this work is all in this etherpad[0], which we'll use
> as a central
> place to keep providing updates and collecting information about the
> containers
> effort. The etherpad is organized as follows:
> 
> * One section defining the goals of the effort and its evolution
> * One section listing bugs that are critical (in addition to the link
> querying
>   all the bugs tagged as `containers`)
> * RDO Tasks are tasks specific for the RDO upstream community (like
> working on a
>   pipeline to build containers)
> * One section dedicated to CI's schedule. Tasks we've pending and
> when they
>   should be completed.
> * One section for general overcloud tasks grouped by milestone
> * One section for review links. This section has been split into
> smaller groups
>   to make reviews easier.
> * One section with the list of services that still have to be
> containerized
> 
> We'll keeping this etherpad updated but we'll be also providing
> updates to the
> mailing list with more frequency.
> 
> [0] https://etherpad.openstack.org/p/tripleo-composable-containers-overcloud
> 
> Let us know if you need anything,
> Flavio
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ml2] - driver capability/feature verification framework

2017-03-13 Thread Kevin Benton
Hi,

At the PTG we briefly discussed a generic system for verifying that the
appropriate drivers are enforcing a particular user-requested feature in
ML2 (e.g. security groups, qos, etc).

Is someone planning on working on this for Pike? If so, can you please file
an RFE so we can prioritize it appropriately? We have to decide if we are
going to block features based on the enforcement by this framework.

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] A sneak peek to TripleO + Containers

2017-03-13 Thread Dan Prince
On Mon, 2017-02-13 at 11:46 +0100, Flavio Percoco wrote:
> Hello,
> 
> I've been playing with a self-installing container for the
> containerized TripleO
> undercloud and I thought I'd share some of the progress we've made so
> far.
> 
> This is definitely not at its final, ideal, state but I wanted to
> provide a
> sneak peek to what is coming and what the updates/content of the
> TripleO+Containers sessions will be next week at the PTG.
> 
> The image[0] shows the output of [1] after running the containerized
> composable
> undercloud deployment using a self-installing container[2]. Again,
> this is not
> stable and it still needs work. You can see in the screenshot that
> one of the
> neutron agents failed and, from the repo[3], that I'm using the
> scripts we've
> been using for development instead of using oooq or something like
> that. One
> interesting thing is that running [2] will leave you with an almost
> entirely
> clean host. It still writes some stuff in `/var/lib` and
> `/etc/puppet` but that
> can be improved for sure.
> 
> Anyway, after all the disclaimers, I hope you'll be able to
> appreciate the
> progress we've made. Dan Prince has been able to deploy an overcloud
> on top of
> the containerized undercloud already, which is great news.

I've been tracking the progress on the composable undercloud since
January upstream here too fwiw:

https://etherpad.openstack.org/p/tripleo-composable-containers-undercloud

Dan

> 
> [0] http://imgur.com/a/Mol28
> [1] docker ps -a --filter label=managed_by=docker-cmd
> [2] docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -ti
> flaper87/tripleo-undercloud-init-container
> [3] https://github.com/flaper87/tripleo-undercloud-init-container
> 
> Enjoy,
> Flavio
> 
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Small steps for Go

2017-03-13 Thread Clint Byrum
Excerpts from Davanum Srinivas's message of 2017-03-13 10:06:30 -0400:
> Update:
> 
> * We have a new git repo (EMPTY!) for the commons work -
> http://git.openstack.org/cgit/openstack/golang-commons/
> * The golang-client has little code, but lot of potential -
> https://git.openstack.org/cgit/openstack/golang-client/
> 

So, we're going to pretend gophercloud doesn't exist and continue to
isolate ourselves from every other community?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][kolla][openstack-helm]k8s-sig-openstack meeting

2017-03-13 Thread Chris Hoge
Of course, the US daylight savings bug bit. Please consider 18:30 UTC
the official time. We will work out scheduling at the meeting.

> On Mar 13, 2017, at 10:36 AM, Chris Hoge  wrote:
> 
> Tomorrow, March 14 at 18:30 UTC/10:30 PT, we will be holding the new
> extension to the k8s-sig-openstack meeting. This bi-weekly meeting
> will focus on Kubernetes as a deployment platform for OpenStack.
> This includes using Helm for orchestration.
> 
> For this meeting, use the zoom id: https://zoom.us/j/3843257457
> And the etherpad: https://etherpad.openstack.org/p/openstack-helm
> 
> On the agenda will be the formalization of this meeting.
> 
> Thanks,
> Chris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [deployment][kolla][openstack-helm]k8s-sig-openstack meeting

2017-03-13 Thread Chris Hoge
Tomorrow, March 14 at 18:30 UTC/10:30 PT, we will be holding the new
extension to the k8s-sig-openstack meeting. This bi-weekly meeting
will focus on Kubernetes as a deployment platform for OpenStack.
This includes using Helm for orchestration.

For this meeting, use the zoom id: https://zoom.us/j/3843257457
And the etherpad: https://etherpad.openstack.org/p/openstack-helm

On the agenda will be the formalization of this meeting.

Thanks,
Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [notification] BlockDeviceMapping in InstancePayload

2017-03-13 Thread Balazs Gibizer

Hi,

As part of the Searchlight integration we need to extend our instance 
notifications with BDM data [1]. As far as I understand, the main goal 
is to provide enough data about the instance to Searchlight so that 
Nova can use Searchlight to generate the responses to GET 
/servers/{server_id} requests based on the data stored in Searchlight.


I checked the server API response and I found one field that needs BDM 
related data: os-extended-volumes:volumes_attached. Only the uuid of 
the volume and the value of delete_on_termination are provided in the 
API response.


I have two options about what to add to the InstancePayload and I want 
to get some opinions about which direction we should go with the 
implementation.


Option A: Add only the minimum required information from the BDM to the 
InstancePayload


 additional InstancePayload field:
 block_devices: ListOfObjectsField(BlockDevicePayload)

 class BlockDevicePayload(base.NotificationPayloadBase):
   fields = {
   'delete_on_termination': fields.BooleanField(default=False),
   'volume_id': fields.StringField(nullable=True),
   }

This payload would be generated from the BDMs connected to the instance 
where the BDM.destination_type == 'volume'.
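
To make Option A concrete, the serialized payload would gain roughly the
following fragment (illustrative only; the volume uuid below is made up):

  sample_payload_fragment = {
      'block_devices': [
          {'volume_id': '7a3f4c0e-2b1d-4d9e-9a6b-5c8e2f0d1a23',
           'delete_on_termination': False},
      ],
  }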



Option B: Provide a comprehensive set of BDM attributes

 class BlockDevicePayload(base.NotificationPayloadBase):
   fields = {
   'source_type': fields.BlockDeviceSourceTypeField(nullable=True),
   'destination_type': fields.BlockDeviceDestinationTypeField(
   nullable=True),
   'guest_format': fields.StringField(nullable=True),
   'device_type': fields.BlockDeviceTypeField(nullable=True),
   'disk_bus': fields.StringField(nullable=True),
   'boot_index': fields.IntegerField(nullable=True),
   'device_name': fields.StringField(nullable=True),
   'delete_on_termination': fields.BooleanField(default=False),
   'snapshot_id': fields.StringField(nullable=True),
   'volume_id': fields.StringField(nullable=True),
   'volume_size': fields.IntegerField(nullable=True),
   'image_id': fields.StringField(nullable=True),
   'no_device': fields.BooleanField(default=False),
   'tag': fields.StringField(nullable=True)
   }

In this case Nova would provide every BDM attached to the instance, not 
just the volume ones.


I intentionally left out connection_info and the db id as those seem 
really system internal.
I also left out the instance-related references, as this 
BlockDevicePayload would be part of an InstancePayload which has the 
instance uuid already.


What do you think? Which direction should we go?

Cheers,
gibi


[1] 
https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-13 Thread Chris Friesen

On 03/10/2017 01:37 PM, John Griffith wrote:


Now that micro-versions are *the API versioning scheme to rule them all* one
question I've not been able to find an answer for is what we're going to promise
here for support and testing.  My understanding thus far is that the "community"
approach here is "nothing is ever deprecated, and everything is supported 
forever".


Nova has so far taken this approach, but there has been talk of bumping the 
minimum required microversion at every dev gathering.  It hasn't happened yet, 
but if the support cost of maintaining the compat code becomes too high then it 
could happen.
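
For anyone less familiar with the mechanics: a client opts into a 
microversion per request via a header, which is why raising the minimum is a 
visible break for callers pinning anything below it. A rough sketch, with a 
placeholder endpoint and token:

import requests

NOVA_ENDPOINT = 'http://controller:8774/v2.1'  # placeholder
TOKEN = 'example-token'  # placeholder; normally from your auth flow

# Pin the compute API microversion for this one request; omitting the
# header means the service treats it as the minimum supported version.
resp = requests.get(
    NOVA_ENDPOINT + '/servers',
    headers={
        'X-Auth-Token': TOKEN,
        'X-OpenStack-Nova-API-Version': '2.38',
    },
)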


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][release][deployment] Packaging problems due to branch/release ordering

2017-03-13 Thread Alan Pevec
2017-03-13 15:49 GMT+01:00 Doug Hellmann :
...
> We test this upgrade scenario in the upstream CI, too. The difference
> is that grenade can tell pip "install exactly the version I am
> pointing to in this directory on disk," rather than relying on
> version numbers to notice that an upgrade is needed (or should be
> avoided, as the case may be).
>
> Is it possible to do that with system packages in some way, too, without
> pinning package versions in puppet or tripleo?

That would not test the regular update flow; we'd have to use a
workaround for the period before milestone 1
and run an explicit yum downgrade with an explicit package list to force
lower-version packages to install.
That means real upgrades would not be tested until this kludge is
removed after the first milestone is released.
Instead, I'll look at adding an automatic bump in the RDO Trunk builder
before milestone 1, i.e. when it detects that the master version is
lower than the stable branches'.

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][release][deployment] Packaging problems due to branch/release ordering

2017-03-13 Thread Alan Pevec
2017-03-09 14:58 GMT+01:00 Jeremy Stanley :
> In the past we addressed this by automatically merging the release
> tag back into master, but we stopped doing that a cycle ago because
> it complicated release note generation.

Also, this included RC >= 2 and final tags, so as soon as the first
stable maintenance version was released, master was again at a lower
version.

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] FYI: Making changes required for Ironic multi-node grenade to work

2017-03-13 Thread Villalovos, John L
In order for Ironic multi-node grenade testing to work, we have to correct a 
mistake we (well, really I) made.

We need to remove the enabling of the ironic plugin in our 
devstack/upgrade/settings file and move it to openstack-infra/project-config, 
as enabling the plugin in that file works for single-node gate jobs but fails 
for multi-node, where the sub-node does not get enabled.

The following patches are proposed to do this:

Only enable the ironic devstack plugin in grenade if it is not yet enabled 
https://review.openstack.org/#/c/444335/

Backport and merge to stable/ocata (https://review.openstack.org/444944) and 
stable/newton (https://review.openstack.org/444950)

Change project-config to add 'enable_plugin ironic' to the grenade jobs. 
https://review.openstack.org/#/c/443348/

Remove all enabling of the ironic devstack plugin in the grenade settings file: 
https://review.openstack.org/#/c/444509/

Profit...


This issue was caused by my cargo-culting devstack/upgrade/settings from another 
project.

Thank you,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-03-13 Thread Julia Kreger
I love it! +1

Thank you!

-Julia

On Fri, Mar 10, 2017 at 11:28 AM, Heidi Joy Tretheway <
heidi...@openstack.org> wrote:

> Hi Ironic team,
> Here’s an update on your project logo. Our illustrator tried to be as true
> as possible to your original, while ensuring it matched the line weight,
> color palette and style of the rest. Thanks for your patience as we worked
> on this! Feel free to direct feedback to me; we really want to get this
> right for you.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-03-13 Thread John Villalovos
+1. Reminds me a lot of our current logo. I like this version the best of
the official stylized logos. Thanks.

On Fri, Mar 10, 2017 at 8:28 AM, Heidi Joy Tretheway  wrote:

> Hi Ironic team,
> Here’s an update on your project logo. Our illustrator tried to be as true
> as possible to your original, while ensuring it matched the line weight,
> color palette and style of the rest. Thanks for your patience as we worked
> on this! Feel free to direct feedback to me; we really want to get this
> right for you.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] propose Alex Schultz core on tripleo-heat-templates

2017-03-13 Thread Carlos Camacho Gonzalez
+1

On Mon, Mar 13, 2017 at 4:12 PM, Jiří Stránský  wrote:

> On 13.3.2017 15:30, Emilien Macchi wrote:
>
>> Hi,
>>
>> Alex is already core on instack-undercloud and puppet-tripleo.
>> His involvement and knowledge in TripleO Heat Templates has been very
>> appreciated over the last months and I think we can give him +2 on
>> this project.
>>
>> As usual, feel free to vote -1/+1 on this proposal.
>>
>
> +1
>
>
>> Thanks,
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Neutron API and WSGI

2017-03-13 Thread Kees Meijs
Hi list,

I'm not sure if this is the right place to ask: could someone please
give me some pointers on Neutron API and WSGI (it seems Pecan is used)?

I'm currently running Neutron from Liberty (2:7.2.0-0ubuntu1~cloud1) and
I'm trying to start the Neutron API using uWSGI, but unfortunately without any
success yet. I was able to run the Nova API, Keystone and Horizon this way
with an Nginx proxy in front.

Example (relevant part):
> # cat /etc/uwsgi/apps-enabled/nova-api.ini
> 
>
> name = nova-api
> uid = nova
> gid = www-data
>
> chdir = /var/lib/nova
> wsgi-file = /usr/lib/python2.7/dist-packages/nova/wsgi/nova-api.py

Works like a charm! But "just starting"
/usr/lib/python2.7/dist-packages/neutron/server/wsgi_pecan.py gives me
no luck, so it seems I might need a different or custom WSGI script.
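
For reference, the kind of custom script I have in mind would look something
like this (an untested sketch; it assumes the Liberty-era helpers in
neutron.common.config, and the file path is hypothetical):

# /var/lib/neutron/neutron-api.py -- untested sketch
from neutron.common import config

# Parse neutron.conf, set up logging, then build the WSGI application
# from the paste pipeline so uWSGI can serve it as "application".
config.init(['--config-file', '/etc/neutron/neutron.conf'])
config.setup_logging()
application = config.load_paste_app('neutron')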

Thank you very much in advance!

Best regards,
Kees


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca] future of thresholder on Storm

2017-03-13 Thread Brandt, Ryan
Hello Joachim, all,

I’m not aware of an upcoming or existing replacement for the thresholding 
engine, Roland may be able to speak on that, but the Monasca query language is 
currently expected to be implemented in the existing engine.

Thanks,
Ryan

From: "Barheine, Joachim"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, March 13, 2017 at 2:10 AM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [monasca] future of thresholder on Storm

Hi all,

What are your plans for the Monasca thresholder on Apache Storm, especially 
with respect to the upcoming Monasca query language?

I saw that there is a component monasca-transform that is based on Apache Spark.

Are there plans to build a new alarming engine on Spark that, for example, 
supports mathematical expressions and multiple metrics in alarm rules?

Thanks and regards
Joachim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Jay Pipes

On 03/13/2017 11:13 AM, Dan Smith wrote:

> Interestingly, we just had a meeting about cells and the scheduler,
> which had quite a bit of overlap on this topic.
>
>> That said, as mentioned in the previous email, the priorities for Pike
>> (and likely Queens) will continue to be, in order: traits, ironic,
>> shared resource pools, and nested providers.
>
> Given that the CachingScheduler is still a thing until we get claims in
> the scheduler, and given that CachingScheduler doesn't use placement
> like the FilterScheduler does, I think we need to prioritize the claims
> part of the above list.
>
> Based on the discussion several of us just had, the priority list
> actually needs to be this:
>
> 1. Traits
> 2. Ironic
> 3. Claims in the scheduler
> 4. Shared resources
> 5. Nested resources
>
> Claims in the scheduler is not likely to be a thing for Pike, but should
> be something we do as much prep for as possible, and land early in Queens.
>
> Personally, I think getting to the point of claiming in the scheduler
> will be easier if we have placement in tree, and anything we break in
> that process will be easier to backport if they're in the same tree.
> However, I'd say that after that goal is met, splitting placement should
> be good to go.


++

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Dan Smith
Interestingly, we just had a meeting about cells and the scheduler,
which had quite a bit of overlap on this topic.

> That said, as mentioned in the previous email, the priorities for Pike
> (and likely Queens) will continue to be, in order: traits, ironic,
> shared resource pools, and nested providers.

Given that the CachingScheduler is still a thing until we get claims in
the scheduler, and given that CachingScheduler doesn't use placement
like the FilterScheduler does, I think we need to prioritize the claims
part of the above list.
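
For those following along: which scheduler a deployment runs is a
one-line choice. As of Ocata it is roughly the following in nova.conf,
though double-check the option spelling for your release:

  [scheduler]
  # filter_scheduler, the default, talks to placement;
  # caching_scheduler does not
  driver = caching_scheduler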

Based on the discussion several of us just had, the priority list
actually needs to be this:

1. Traits
2. Ironic
3. Claims in the scheduler
4. Shared resources
5. Nested resources

Claims in the scheduler is not likely to be a thing for Pike, but should
be something we do as much prep for as possible, and land early in Queens.

Personally, I think getting to the point of claiming in the scheduler
will be easier if we have placement in tree, and anything we break in
that process will be easier to backport if they're in the same tree.
However, I'd say that after that goal is met, splitting placement should
be good to go.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] propose Alex Schultz core on tripleo-heat-templates

2017-03-13 Thread Jiří Stránský

On 13.3.2017 15:30, Emilien Macchi wrote:

> Hi,
>
> Alex is already core on instack-undercloud and puppet-tripleo.
> His involvement and knowledge in TripleO Heat Templates has been very
> appreciated over the last months and I think we can give him +2 on
> this project.
>
> As usual, feel free to vote -1/+1 on this proposal.

+1

> Thanks,




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] propose Alex Schultz core on tripleo-heat-templates

2017-03-13 Thread Marios Andreou
On Mon, Mar 13, 2017 at 4:30 PM, Emilien Macchi  wrote:

> Hi,
>
> Alex is already core on instack-undercloud and puppet-tripleo.
> His involvement and knowledge in TripleO Heat Templates has been very
> appreciated over the last months and I think we can give him +2 on
> this project.
>
> As usual, feel free to vote -1/+1 on this proposal.
>
>
+1



> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] propose Alex Schultz core on tripleo-heat-templates

2017-03-13 Thread Brent Eagles
On Mon, Mar 13, 2017 at 12:00 PM, Emilien Macchi  wrote:

> Hi,
>
> Alex is already core on instack-undercloud and puppet-tripleo.
> His involvement and knowledge in TripleO Heat Templates has been very
> appreciated over the last months and I think we can give him +2 on
> this project.
>
> As usual, feel free to vote -1/+1 on this proposal.
>
> Thanks,
> --
> Emilien Macchi
>

+1 indeed.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What do we want to be when we grow up?

2017-03-13 Thread Thierry Carrez
Joshua Harlow wrote:
> [...]
> * Be opinionated; let's actually pick *specific* technologies based on
> well thought out decisions about what we want out of those technologies
> and integrate them deeply (and if we make a bad decision, that's ok, we
> are all grown ups and we'll deal with it). IMHO it hasn't turned out
> well trying to have drivers for everything and everyone so let's umm
> stop doing that.

About "being all grown-ups and dealing with it", the problem is that
it's mostly an externality: the choice is made by developers and the
cost of handling the bad decision is carried by operators. Externalities
make for bad decisions.

I agree that having drivers for everything is nonsense. The model we
have started to promote (around base services) is an expand/contract
model: start by expanding support to a couple viable options, and then
once operators / the market decides on one winner, contract to only
supporting that winner, and start using the specific features of that
technology.

The benefit is that the final choice ends up being made by the
operators. Yes, it means that at the start you will have to do with the
lowest common denominator. But frankly at this stage it would be awesome
to just have the LCD of DLMs, rather than continue disagreeing on
Zookeeper vs. etcd and not even having that lowest common denominator.
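
To make "the LCD of DLMs" concrete: tooz is the abstraction we already
have for this, and the lowest-common-denominator usage is the same
handful of calls whichever backend wins. A minimal sketch, assuming a
reachable coordination backend:

  from tooz import coordination

  # Only the URL changes between ZooKeeper and etcd;
  # the locking calls below stay identical.
  coord = coordination.get_coordinator(
      'zookeeper://127.0.0.1:2181', b'member-1')
  coord.start()
  with coord.get_lock(b'resource-x'):
      pass  # critical section, protected across services
  coord.stop()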

> * Leads others; we are one of the older cloud foundations (I think?) so
> we should be leading others such as the CNCF and such, so we must be
> heavily outreaching to these others and helping them learn from our
> mistakes

We can always do more, but this is already happening. I was asked for
and provided early advice to the CNCF while they were setting up their
technical governance structure. Other foundations reached out to us to
discuss and adopt our vulnerability management models. There are a lot
more examples.

> [...]
> * Full control of infrastructure (mostly discard it); I don't think we
> necessarily need to have full control of infrastructure anymore. I'd
> rather target something that builds on the layers of others at this
> point and offers value there.

+1

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] propose Alex Schultz core on tripleo-heat-templates

2017-03-13 Thread Michele Baldessari
On Mon, Mar 13, 2017 at 10:30:08AM -0400, Emilien Macchi wrote:
> Hi,
> 
> Alex is already core on instack-undercloud and puppet-tripleo.
> His involvement and knowledge in TripleO Heat Templates has been very
> appreciated over the last months and I think we can give him +2 on
> this project.
> 
> As usual, feel free to vote -1/+1 on this proposal.

+1!

-- 
Michele Baldessari
C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][tripleo][release] puppet module versioning

2017-03-13 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-03-13 15:19:40 +0100:
> Doug Hellmann wrote:
> > Excerpts from Alex Schultz's message of 2017-03-09 11:29:28 -0700:
> >> On Thu, Mar 9, 2017 at 10:54 AM, Doug Hellmann wrote:
> >>> Excerpts from Alex Schultz's message of 2017-03-07 12:56:17 -0700:
>  So what I'm proposing is to use a "-dev" pre-release identifier
>  between version releases for puppet modules.  As part of the tagging
>  process we could propose the next release version with "-dev" to the
>  repository.  The flow would look something like...
>  [...]
> 
> As an aside, we used to do that for pre-versioned deliverables (drop the
> "2015.1.0" as the "next" version in setup.cfg as the very first commit
> in master after a branch point). The main issue with it was that it
> required a lock on master to make it work (avoid other commits to make
> it into master before the version bump).

It's not perfect, but since we can't make puppet take its version
from the git tag (like we could with pbr), and the jobs would apply
only to the puppet repositories, it seems like a reasonable approach.
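
To make the flow concrete with a hypothetical example: the commit
proposed right after tagging 10.0.0 would bump metadata.json to
something like

  {
    "name": "openstack-foo",
    "version": "10.1.0-dev"
  }

(other metadata keys omitted), so builds from master sort above the
10.0.x releases but below the eventual 10.1.0 tag.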

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][release][deployment] Packaging problems due to branch/release ordering

2017-03-13 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2017-03-13 15:13:18 +0100:
> Jeremy Stanley wrote:
> > On 2017-03-09 12:12:01 +0100 (+0100), Alan Pevec wrote:
> >> 2017-03-09 10:41 GMT+01:00 Alfredo Moralejo Alonso:
> >>> On Wed, Mar 8, 2017 at 12:44 PM, Steven Hardy wrote:
>  https://bugs.launchpad.net/tripleo/+bug/1669462
> 
>  I'm not clear on the best path forward at this point, but the simplest 
>  one
>  suggested so far is to simply tag a new pre-milestone/alpha release for 
>  all
>  master branches, which will enable testing upgrades to master.
> >>
> >> That would be indeed simplest and cleanest solution but it has been
> >> rejected in the past, IIUC with reasoning that projects are not sure
> >> what will be their next version. IMHO that should not be the case if

There is the problem that some deliverables don't know their next
version in advance. There is also the problem that the vast majority
of our deliverables do not use pre-release versions (like alphas)
at all.

> >> the project has decided to have a stable branch and follows semver and
> >> stable branch policy: in that case project will keep X.Y frozen on
> >> stable so master should be bumped to at least X.Y+1 immediately after
> >> branching stable.
> > [...]
> > 
> > In the past we addressed this by automatically merging the release
> > tag back into master, but we stopped doing that a cycle ago because
> > it complicated release note generation.
> 
> Right, there is a bit of history here.
> 
> One option is to tag the first commit on master after the stable branch
> point, to reduce (but not eliminate) the window where stable > master,
> but that relied on a lot of discipline and checks (since it's hard to
> automate).

Yes, I think the level of coordination involved in making this work for
all projects would be challenging to accomplish with the small release
team. We have the release task list down to a fairly small minimum right
now, and I don't relish the idea of adding more complexity if we can
avoid it.

> We preferred the option to merge tags back on master, which was not
> perfect (creates overlap in versions automatically generated) but at
> least the dev version was always > last released version.
> 
> However, the merged tags history prevented reno from determining
> correctly where it was and generating appropriate release notes. So we

The problem was that, with a tag appearing on multiple branches and
basically being inserted into the master branch at a random spot, it
wasn't possible to actually tell which series a version belonged to and
which notes should go with which versions.

> dropped it... Robert Collins argued that users should consume from a
> single channel anyway: either follow master or follow a stable branch,
> and not combine both. Here the problem is that you test upgrades from
> stable/$foo to master, which is basically something the model does not
> support.

We test this upgrade scenario in the upstream CI, too. The difference
is that grenade can tell pip "install exactly the version I am
pointing to in this directory on disk," rather than relying on
version numbers to notice that an upgrade is needed (or should be
avoided, as the case may be).

Is it possible to do that with system packages in some way, too, without
pinning package versions in puppet or tripleo?
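
For reference, the grenade trick amounts to installing straight from
the checked-out tree, roughly:

  pip install -U /opt/stack/new/nova

which installs whatever is on disk without caring how its version
number compares to what came before -- and I don't know of a clean
apt/yum equivalent short of pinning, hence the question.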

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Chris Dent

On Mon, 13 Mar 2017, Sylvain Bauza wrote:


> That way, we could do the necessary quirks in the client in case the
> split goes bad.


I don't understand this statement. If the client is always using the
service catalog (which it should be) and the client is always only
aware of the HTTP interface (which it should be) what difference does
where the code lives make?
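
To put that in code terms, a minimal sketch of catalog-based discovery
(credentials elided, any valid auth will do):

  from keystoneauth1.identity import v3
  from keystoneauth1 import session

  auth = v3.Password(auth_url='http://keystone:5000/v3',
                     username='...', password='...',
                     project_name='service',
                     user_domain_id='default',
                     project_domain_id='default')
  sess = session.Session(auth=auth)

  # where placement lives is the catalog's problem, not the client's
  endpoint = sess.get_endpoint(service_type='placement')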

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-13 Thread Fox, Kevin M
So for cloud apps, you must have a config management system to do secure secret 
transport? A cloud app developer must target which config management tool? How 
do you know what the end user sites are running? Do you have to support all of 
them? Config management tooling is very religious, so if you don't pick the 
right one, folks will shun your app. That's way too burdensome on app developers 
and users to require.

Instead, look at how AWS and k8s handle creds. Any VM can just request a fresh 
token. In k8s, a fresh token is injected into the container automatically. This 
makes it extremely easy to deal with. This is what app developers are expecting 
now, and OpenStack is turning folks away when we keep pushing a lot of work 
their way.
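
To be concrete about the k8s case, every pod simply finds a service
account token already mounted, no config management involved:

$ cat /var/run/secrets/kubernetes.io/serviceaccount/token
eyJhbGciOiJSUzI1NiIs...   (a JWT, truncated here)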

OpenStack's general solution for any hard problem seems to be: make the 
operator, app dev, or user deal with it, not OpenStack.

That's ok if those folks have nowhere else to go. They do now, and are starting 
to go elsewhere as there are more options.

OpenStack needs to abandon that philosophy. It can no longer afford it.

Thanks,
Kevin


From: Clint Byrum
Sent: Sunday, March 12, 2017 10:30:49 AM
To: openstack-dev
Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog

Excerpts from Fox, Kevin M's message of 2017-03-12 16:54:20 +:
> I totally agree that policy management is a major problem too. A much bigger 
> one than instance users, and something I was hoping to get to after instance 
> users, but never made it past the easier of the two. :/
>
>
> The just inject creds solution keeps getting proposed, but only looks at the 
> surface of the issue so has a lot of issues under the hood. Lets dig in again.
>
> Lets say I create a heat template and inject creds through Parameters, as 
> that is the natural place for a user to fill out settings and launch their 
> application.
>
> The cred is loaded unencrypted into the heat database. Then heat-engine 
> pushes it into the nova database where it resides unencrypted, so it can be 
> sent to cloud init, usually also in an unencrypted form.
>
> You delete the heat stack, and the credential still sticks around in the nova 
> database long after the vm is deleted, as it keeps deleted data.
>
> The channels for passing stuff to a vm are much better at passing config to 
> the vm, not secrets.
>
> Its also a one shot way to get an initial cred to a vm, but not a way to 
> update it should the need arise. Also, how is the secret maintained in a way 
> that rebooting the vm works while snapshotting the vm doesn't capture the 
> secret, etc.
>
> The use case/issues are described exhaustively in the spec and describe why 
> it's not something that can easily be tackled by "just do X" 
> solutions. I proposed one implementation I think will work generally and 
> cover all bases. But am open to other implementations that cover all the 
> bases. Many half solutions have been proposed, but the whole point is 
> security, so a half solution that has big security holes in it isn't really a 
> solution.
>

_OR_, you inject a nonce that is used to authenticate the instance to
config management. If you're ever going to talk to anything outside of
the cloud APIs, you'll need this anyway.

Once you're involved with config management you are already sending
credentials of various types to your instances.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] propose Alex Schultz core on tripleo-heat-templates

2017-03-13 Thread Emilien Macchi
Hi,

Alex is already core on instack-undercloud and puppet-tripleo.
His involvement and knowledge in TripleO Heat Templates has been very
appreciated over the last months and I think we can give him +2 on
this project.

As usual, feel free to vote -1/+1 on this proposal.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sylvain Bauza


On 13/03/2017 15:17, Jay Pipes wrote:
> On 03/13/2017 09:16 AM, Sylvain Bauza wrote:
>> Please don't.
>> Having a separate repository would mean that deployers would need to
>> implement a separate package for placement plus discussing
>> how/when to use it.
> 
> Apparently, there already *are* separate packages for
> openstack-nova-api-placement...
> 

Good to know. That said, I'm not sure all deployers are packaging that
separately :-)

FWIW, I'm not against the split, I just think we should first have a
separate and clean client package for placement in an earlier cycle.

My thoughts are:
 - in Pike/Queens (TBD), make placementclient optional, falling back to
scheduler.report
 - in Queens/R, make placementclient mandatory
 - in R/S, make Placement a separate service.

That way, we could do the necessary quirks in the client in case the
split goes bad.

-Sylvain


> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][tripleo][release] puppet module versioning

2017-03-13 Thread Thierry Carrez
Doug Hellmann wrote:
> Excerpts from Alex Schultz's message of 2017-03-09 11:29:28 -0700:
>> On Thu, Mar 9, 2017 at 10:54 AM, Doug Hellmann  wrote:
>>> Excerpts from Alex Schultz's message of 2017-03-07 12:56:17 -0700:
 So what I'm proposing is to use a "-dev" pre-release identifier
 between version releases for puppet modules.  As part of the tagging
 process we could propose the next release version with "-dev" to the
 repository.  The flow would look something like...
 [...]

As an aside, we used to do that for pre-versioned deliverables (drop the
"2015.1.0" as the "next" version in setup.cfg as the very first commit
in master after a branch point). The main issue with it was that it
required a lock on master to make it work (avoid other commits to make
it into master before the version bump).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Jay Pipes

On 03/13/2017 09:16 AM, Sylvain Bauza wrote:

> Please don't.
> Having a separate repository would mean that deployers would need to
> implement a separate package for placement plus discussing
> how/when to use it.


Apparently, there already *are* separate packages for 
openstack-nova-api-placement...


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sylvain Bauza


On 13/03/2017 14:59, Jay Pipes wrote:
> On 03/13/2017 08:41 AM, Chris Dent wrote:
>>
>> From the start we've been saying that it is probably right for the
>> placement service to have its own repository. This is aligned with
>> the long term goal of placement being useful to many services, not
>> just nova, and also helps to keep placement contained and
>> comprehensible and thus maintainable.
>>
>> I've been worried for some time that the longer we put this off, the
>> more complicated an extraction becomes. Rather than carry on
>> worrying about it, I took some time over the weekend to experiment
>> with a slapdash extraction to see if I could identify what would be
>> the sticking points. The results are here
>>
>> https://github.com/cdent/placement
>>
>> My methodology was to lay in the basics for being able to run the
>> functional (gabbi) tests and then using the failures to fix the
>> code. If you read the commit log (there's only 16 commits) in
>> reverse it tells a little story of what was required.
>>
>> All the gabbi tests are now passing (without them being changed)
>> except for four that verify the response strings from exceptions. I
>> didn't copy in exceptions, I created them anew to avoid copying
>> unnecessary nova-isms, and didn't bother (for now) with replicating
>> keyword handling.
>>
>> Unit tests and non-gabbi functional tests were not transferred over
>> (as that would have been something more than "slapdash").
>>
>> Some observations or things to think about:
>>
>> * Since there's only one database and all the db query code is in
>>   the objects, the database handling is simplified. oslo_db setup
>>   can be used more directly.
>>
>> * The objects being oslo versioned objects is kind of overkill in
>>   this context but doesn't get too much in the way.
>>
>> * I collapsed the fields.ResourceClass and objects.ResourceClass
>>   into the same file so the latter was renamed. Doing this
>>   exploration made a lot of the ResourceClass handling look pretty
>>   complicated. Much of that complexity is because we had to deal
>>   with evolving through different functionality. If we built this
>>   functionality in a greenfield repo it could probably be more
>>   simple.
>>
>> * The FaultWrapper middleware is turned off in the WSGI stack
>>   because copying it over from nova would require dealing with a
>>   hierarchy of classes. A simplified version of it would probably
>>   need to be stuck back in (and apparently a gabbi test to exercise
>>   it, as there's not one now).
>>
>> * The number of requirements in the two requirements files is nicely
>>   small.
>>
>> * The scheduler report client in nova, and to a minor degree the
>>   filter scheduler, use some of the same exceptions and ovo.objects
>>   that placement uses, which presents a bit of blechiness with
>>   regards to code duplication. I suppose long term we could consider
>>   a placement-lib or something like that, except that the
>>   functionality provided by the same-named objects and exceptions
>>   are not entirely congruent. From the point of view of the external
>>   part of the placement API what matters are not objects, but JSON
>>   structures.
>>
>> * I've done nothing here with regard to how devstack would choose
>>   between the old and new placement code locations but that will be
>>   something to solve. It seems like it ought to be possible for two
>>   different sources of the placement-code to exist; just register
>>   one endpoint. Since we've declared that service discovery is the
>>   correct and only way to find placement, this ought to be okay.
>>
>> I'm not sure how or if we want to proceed with this topic, but I
>> think this at least allows us to talk about it with less guessing.
>> My general summary is "yeah, this is doable, without huge amounts
>> of work."
> 
> Chris, great work on this over the weekend. It gives us some valuable
> data points and information to consider about the split out of the
> placement API. Really appreciate the effort.
> 
> A few things:
> 
> 1) Definitely agree on the need to have the Nova-side stuff *not*
> reference ovo objects for resource providers. We want the Nova side to
> use JSON/dict representations within the resource tracker and scheduler.
> This work can be done right now and isn't dependent on anything AFAIK.
> 
> 2) The FaultWrapper stuff can also be handled relatively free of
> dependencies. In fact, there is a spec around error reporting using
> codes in addition to messages [1] that we could tack on the FaultWrapper
> cleanup items. Basically, make that spec into a "fix up error handling
> in placement API" general work item list...
> 
> 3) While the split of the placement API is not the highest priority
> placement item in Pike (we are focused on traits, ironic integration,
> shared pools and then nested providers, in that order), I do think it's
> worthwhile splitting the placement service out from Nova in Queens. I
> don't believe that doing claims in the placement API is something that
> needs to be completed before splitting out. I'll respond to Sylvain's
> thread about this separately.

Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Jay Pipes

On 03/13/2017 10:02 AM, Eoghan Glynn wrote:

>>> We are close to the first milestone in Pike, right? We also have
>>> priorities for Placement that we discussed at the PTG and we never
>>> discussed how to cut placement during the PTG.
>>>
>>> Also, we haven't discussed yet with operators about how they would like
>>> to see Placement being cut. At least, we should wait for the Forum about
>>> that.
>>>
>>> For the moment, only operators using Ocata are using the placement API
>>> and we know that most of them had problems when using it. Rushing to
>>> cut Placement in Queens would then mean that they would only have
>>> one stable cycle after Ocata for using it.
>>> Also, discussing this at the above would then mean that we could punt other
>>> discussions. For example, I'd prefer to discuss how we could fix the
>>> main problem we have with the scheduler about scheduler claims *before*
>>> trying to think on how to cut Placement.
>>
>> It's definitely good to figure out what challenges people were having in
>> rolling things out and document them, to figure out if they've been
>> addressed or not. One key thing seemed to be not understanding that
>> services need to all be registered in the catalog before services beyond
>> keystone are launched. There is also probably a keystoneauth1 fix for
>> this to make it a softer fail.
>>
>> The cut over can be pretty seamless. Yes, upgrade scenarios need to be
>> looked at. But that's honestly not much different from deprecating
>> config options or making new aliases. It should be much less user
>> noticeable than the newly required cells v2 support.
>>
>> The real question to ask, now that there is a well-defined external
>> interface, is whether evolution of the Placement service stack, and
>> addressing bugs and shortcomings related to its usage, will work better
>> with a dedicated core team, or inside of Nova. My gut says Queens is the
>> right time to make that split, and to start planning for it now.
>
> From a downstream perspective, I'd prefer to see a concentration on
> deriving *user-visible* benefits from placement before incurring more
> churn with an extraction (given the proximity to the churn on
> deployment tooling from the scheduler decision-making cutover to
> placement at the end of ocata).


The scheduler decision-making cutover *was* a user-visible benefit from 
the placement service. :)


Just because we could have done a better job with functional integration 
testing and documentation of the upgrade steps doesn't mean we should 
slow down progress here. We've learned lessons in Ocata around the need 
to be in a tighter feedback loop with the deployment teams.


Sean (and I) are merely suggesting to get the timeline for a split-out 
hammered out and ready for Queens so that we get ahead of the game and 
actually plan meetings with deployment folks and make sure docs and 
tests are proper ahead of the split-out.


That said, as mentioned in the previous email, the priorities for Pike 
(and likely Queens) will continue to be, in order: traits, ironic, 
shared resource pools, and nested providers.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][release][deployment] Packaging problems due to branch/release ordering

2017-03-13 Thread Thierry Carrez
Jeremy Stanley wrote:
> On 2017-03-09 12:12:01 +0100 (+0100), Alan Pevec wrote:
>> 2017-03-09 10:41 GMT+01:00 Alfredo Moralejo Alonso:
>>> On Wed, Mar 8, 2017 at 12:44 PM, Steven Hardy wrote:
 https://bugs.launchpad.net/tripleo/+bug/1669462

 I'm not clear on the best path forward at this point, but the simplest one
 suggested so far is to simply tag a new pre-milestone/alpha release for all
 master branches, which will enable testing upgrades to master.
>>
>> That would be indeed simplest and cleanest solution but it has been
>> rejected in the past, IIUC with reasoning that projects are not sure
>> what will be their next version. IMHO that should not be the case if
>> the project has decided to have a stable branch and follows semver and
>> stable branch policy: in that case project will keep X.Y frozen on
>> stable so master should be bumped to at least X.Y+1 immediately after
>> branching stable.
> [...]
> 
> In the past we addressed this by automatically merging the release
> tag back into master, but we stopped doing that a cycle ago because
> it complicated release note generation.

Right, there is a bit of history here.

One option is to tag the first commit on master after the stable branch
point, to reduce (but not eliminate) the window where stable > master,
but that relied on a lot of discipline and checks (since it's hard to
automate).

We preferred the option to merge tags back on master, which was not
perfect (creates overlap in versions automatically generated) but at
least the dev version was always > last released version.

However, the merged tags history prevented reno from determining
correctly where it was and generating appropriate release notes. So we
dropped it... Robert Collins argued that users should consume from a
single channel anyway: either follow master or follow a stable branch,
and not combine both. Here the problem is that you test upgrades from
stable/$foo to master, which is basically something the model does not
support.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Small steps for Go

2017-03-13 Thread Davanum Srinivas
Update:

* We have a new git repo (EMPTY!) for the commons work -
http://git.openstack.org/cgit/openstack/golang-commons/
* The golang-client has little code, but a lot of potential -
https://git.openstack.org/cgit/openstack/golang-client/

Anyone interested, please get cranking!

Oh, let's use #openstack-golang IRC channel to self-organize

Thanks,
Dims


On Tue, Mar 7, 2017 at 8:17 AM, Davanum Srinivas  wrote:
> Folks,
>
> Anyone interested? https://etherpad.openstack.org/p/go-and-containers
>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Eoghan Glynn
> > We are close to the first milestone in Pike, right? We also have
> > priorities for Placement that we discussed at the PTG and we never
> > discussed how to cut placement during the PTG.
> >
> > Also, we haven't discussed yet with operators about how they would like
> > to see Placement being cut. At least, we should wait for the Forum about
> > that.
> >
> > For the moment, only operators using Ocata are using the placement API
> > and we know that most of them had problems when using it. Rushing to
> > cut Placement in Queens would then mean that they would only have
> > one stable cycle after Ocata for using it.
> > Also, discussing this at the above would then mean that we could punt other
> > discussions. For example, I'd prefer to discuss how we could fix the
> > main problem we have with the scheduler about scheduler claims *before*
> > trying to think on how to cut Placement.
>
> It's definitely good to figure out what challenges people were having in
> rolling things out and document them, to figure out if they've been
> addressed or not. One key thing seemed to be not understanding that
> services need to all be registered in the catalog before services beyond
> keystone are launched. There is also probably a keystoneauth1 fix for
> this to make it a softer fail.
>
> The cut over can be pretty seamless. Yes, upgrade scenarios need to be
> looked at. But that's honestly not much different from deprecating
> config options or making new aliases. It should be much less user
> noticeable than the newly required cells v2 support.
>
> The real question to ask, now that there is a well-defined external
> interface, is whether evolution of the Placement service stack, and
> addressing bugs and shortcomings related to its usage, will work better
> with a dedicated core team, or inside of Nova. My gut says Queens is the
> right time to make that split, and to start planning for it now.

From a downstream perspective, I'd prefer to see a concentration on
deriving *user-visible* benefits from placement before incurring more
churn with an extraction (given the proximity to the churn on
deployment tooling from the scheduler decision-making cutover to
placement at the end of ocata).

Just my $0.02 ...

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Jay Pipes

On 03/13/2017 08:41 AM, Chris Dent wrote:


> From the start we've been saying that it is probably right for the
> placement service to have its own repository. This is aligned with
> the long term goal of placement being useful to many services, not
> just nova, and also helps to keep placement contained and
> comprehensible and thus maintainable.
>
> I've been worried for some time that the longer we put this off, the
> more complicated an extraction becomes. Rather than carry on
> worrying about it, I took some time over the weekend to experiment
> with a slapdash extraction to see if I could identify what would be
> the sticking points. The results are here
>
> https://github.com/cdent/placement
>
> My methodology was to lay in the basics for being able to run the
> functional (gabbi) tests and then using the failures to fix the
> code. If you read the commit log (there's only 16 commits) in
> reverse it tells a little story of what was required.
>
> All the gabbi tests are now passing (without them being changed)
> except for four that verify the response strings from exceptions. I
> didn't copy in exceptions, I created them anew to avoid copying
> unnecessary nova-isms, and didn't bother (for now) with replicating
> keyword handling.
>
> Unit tests and non-gabbi functional tests were not transferred over
> (as that would have been something more than "slapdash").
>
> Some observations or things to think about:
>
> * Since there's only one database and all the db query code is in
>   the objects, the database handling is simplified. oslo_db setup
>   can be used more directly.
>
> * The objects being oslo versioned objects is kind of overkill in
>   this context but doesn't get too much in the way.
>
> * I collapsed the fields.ResourceClass and objects.ResourceClass
>   into the same file so the latter was renamed. Doing this
>   exploration made a lot of the ResourceClass handling look pretty
>   complicated. Much of that complexity is because we had to deal
>   with evolving through different functionality. If we built this
>   functionality in a greenfield repo it could probably be more
>   simple.
>
> * The FaultWrapper middleware is turned off in the WSGI stack
>   because copying it over from nova would require dealing with a
>   hierarchy of classes. A simplified version of it would probably
>   need to be stuck back in (and apparently a gabbi test to exercise
>   it, as there's not one now).
>
> * The number of requirements in the two requirements files is nicely
>   small.
>
> * The scheduler report client in nova, and to a minor degree the
>   filter scheduler, use some of the same exceptions and ovo.objects
>   that placement uses, which presents a bit of blechiness with
>   regards to code duplication. I suppose long term we could consider
>   a placement-lib or something like that, except that the
>   functionality provided by the same-named objects and exceptions
>   are not entirely congruent. From the point of view of the external
>   part of the placement API what matters are not objects, but JSON
>   structures.
>
> * I've done nothing here with regard to how devstack would choose
>   between the old and new placement code locations but that will be
>   something to solve. It seems like it ought to be possible for two
>   different sources of the placement-code to exist; just register
>   one endpoint. Since we've declared that service discovery is the
>   correct and only way to find placement, this ought to be okay.
>
> I'm not sure how or if we want to proceed with this topic, but I
> think this at least allows us to talk about it with less guessing.
> My general summary is "yeah, this is doable, without huge amounts
> of work."


Chris, great work on this over the weekend. It gives us some valuable 
data points and information to consider about the split out of the 
placement API. Really appreciate the effort.


A few things:

1) Definitely agree on the need to have the Nova-side stuff *not* 
reference ovo objects for resource providers. We want the Nova side to 
use JSON/dict representations within the resource tracker and scheduler. 
This work can be done right now and isn't dependent on anything AFAIK.


2) The FaultWrapper stuff can also be handled relatively free of 
dependencies. In fact, there is a spec around error reporting using 
codes in addition to messages [1] that we could tack on the FaultWrapper 
cleanup items. Basically, make that spec into a "fix up error handling 
in placement API" general work item list...


3) While the split of the placement API is not the highest priority 
placement item in Pike (we are focused on traits, ironic integration, 
shared pools and then nested providers, in that order), I do think it's 
worthwhile splitting the placement service out from Nova in Queens. I 
don't believe that doing claims in the placement API is something that 
needs to be completed before splitting out. I'll respond to Sylvain's 
thread about this separately.


Thanks again for your efforts this weekend,
-jay

[1] https://review.openstack.org/#/c/418393/


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sean Dague
On 03/13/2017 09:33 AM, Sylvain Bauza wrote:

> 
> We are close to the first milestone in Pike, right? We also have
> priorities for Placement that we discussed at the PTG and we never
> discussed how to cut placement during the PTG.
> 
> Also, we haven't discussed yet with operators about how they would like
> to see Placement being cut. At least, we should wait for the Forum about
> that.
> 
> For the moment, only operators using Ocata are using the placement API
> and we know that most of them had problems when using it. Rushing to
> cut Placement in Queens would then mean that they would only have
> one stable cycle after Ocata for using it.
> Also, discussing this at the above would then mean that we could punt other
> discussions. For example, I'd prefer to discuss how we could fix the
> main problem we have with the scheduler about scheduler claims *before*
> trying to think on how to cut Placement.

It's definitely good to figure out what challenges people were having in
rolling things out and document them, to figure out if they've been
addressed or not. One key thing seemed to be not understanding that
services need to all be registered in the catalog before services beyond
keystone are launched. There is also probably a keystoneauth1 fix for
this to make it a softer fail.

The cut over can be pretty seamless. Yes, upgrade scenarios need to be
looked at. But that's honestly not much different from deprecating
config options or making new aliases. It should be much less user
noticeable than the newly required cells v2 support.

The real question to ask, now that there is a well-defined external
interface, is whether evolution of the Placement service stack, and
addressing bugs and shortcomings related to its usage, will work better
with a dedicated core team, or inside of Nova. My gut says Queens is the
right time to make that split, and to start planning for it now.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sylvain Bauza


On 13/03/2017 14:21, Sean Dague wrote:
> On 03/13/2017 09:16 AM, Sylvain Bauza wrote:
>>
>>
>> On 13/03/2017 13:41, Chris Dent wrote:
>>>
>>> From the start we've been saying that it is probably right for the
>>> placement service to have its own repository. This is aligned with
>>> the long term goal of placement being useful to many services, not
>>> just nova, and also helps to keep placement contained and
>>> comprehensible and thus maintainable.
>>>
>>> I've been worried for some time that the longer we put this off, the
>>> more complicated an extraction becomes. Rather than carry on
>>> worrying about it, I took some time over the weekend to experiment
>>> with a slapdash extraction to see if I could identify what would be
>>> the sticking points. The results are here
>>>
>>> https://github.com/cdent/placement
>>>
>>> My methodology was to lay in the basics for being able to run the
>>> functional (gabbi) tests and then using the failures to fix the
>>> code. If you read the commit log (there's only 16 commits) in
>>> reverse it tells a little story of what was required.
>>>
>>> All the gabbi tests are now passing (without them being changed)
>>> except for four that verify the response strings from exceptions. I
>>> didn't copy in exceptions, I created them anew to avoid copying
>>> unnecessary nova-isms, and didn't bother (for now) with replicating
>>> keyword handling.
>>>
>>> Unit tests and non-gabbi functional tests were not transferred over
>>> (as that would have been something more than "slapdash").
>>>
>>> Some observations or things to think about:
>>>
>>> * Since there's only one database and all the db query code is in
>>>   the objects, the database handling is simplified. oslo_db setup
>>>   can be used more directly.
>>>
>>> * The objects being oslo versioned objects is kind of overkill in
>>>   this context but doesn't get too much in the way.
>>>
>>> * I collapsed the fields.ResourceClass and objects.ResourceClass
>>>   into the same file so the latter was renamed. Doing this
>>>   exploration made a lot of the ResourceClass handling look pretty
>>>   complicated. Much of that complexity is because we had to deal
>>>   with evolving through different functionality. If we built this
>>>   functionality in a greenfield repo it could probably be more
>>>   simple.
>>>
>>> * The FaultWrapper middleware is turned off in the WSGI stack
>>>   because copying it over from nova would require dealing with a
>>>   hierarchy of classes. A simplified version of it would probably
>>>   need to be stuck back in (and apparently a gabbi test to exercise
>>>   it, as there's not one now).
>>>
>>> * The number of requirements in the two requirements files is nicely
>>>   small.
>>>
>>> * The scheduler report client in nova, and to a minor degree the
>>>   filter scheduler, use some of the same exceptions and ovo.objects
>>>   that placement uses, which presents a bit of blechiness with
>>>   regards to code duplication. I suppose long term we could consider
>>>   a placement-lib or something like that, except that the
>>>   functionality provided by the same-named objects and exceptions
>>>   are not entirely congruent. From the point of view of the external
>>>   part of the placement API what matters are not objects, but JSON
>>>   structures.
>>>
>>> * I've done nothing here with regard to how devstack would choose
>>>   between the old and new placement code locations but that will be
>>>   something to solve. It seems like it ought to be possible for two
>>>   different sources of the placement-code to exist; just register
>>>   one endpoint. Since we've declared that service discovery is the
>>>   correct and only way to find placement, this ought to be okay.
>>>
>>> I'm not sure how or if we want to proceed with this topic, but I
>>> think this at least allows us to talk about it with less guessing.
>>> My general summary is "yeah, this is doable, without huge amounts
>>> of work."
>>>
>>
>> Please don't.
>> Having a separate repository would mean that deployers would need to
>> implement a separate package for placement plus discussing
>> how/when to use it.
>>
>> For the moment, I'd prefer to leave operators using the placement
>> API by using Nova first and then after like 3 or 4 cycles, possibly
>> discussing with them how to cut it.
>>
>> At the moment, I think that we already have a good priority for
>> placement in Nova, so I don't think it's a problem to still have it in Nova.
> 
> Given that the design was always to split (eventually), and part of that
> means that we get to start building up a dedicated core team, I'm not
> sure why waiting 3 or 4 additional cycles makes sense here.
> 
> I get that Pike is probably the wrong release to do this cut, given that
> it only *just* became mandatory. But it feels like saying this would be
> a Queens goal, and getting things structured in such a way that the
> split is easy (like any renaming of binaries, any things that should
> deprecate), would seem to be good goals for Pike.

Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sean Dague
On 03/13/2017 09:16 AM, Sylvain Bauza wrote:
> 
> 
> On 13/03/2017 13:41, Chris Dent wrote:
>>
>> From the start we've been saying that it is probably right for the
>> placement service to have its own repository. This is aligned with
>> the long term goal of placement being useful to many services, not
>> just nova, and also helps to keep placement contained and
>> comprehensible and thus maintainable.
>>
>> I've been worried for some time that the longer we put this off, the
>> more complicated an extraction becomes. Rather than carry on
>> worrying about it, I took some time over the weekend to experiment
>> with a slapdash extraction to see if I could identify what would be
>> the sticking points. The results are here
>>
>> https://github.com/cdent/placement
>>
>> My methodology was to lay in the basics for being able to run the
>> functional (gabbi) tests and then using the failures to fix the
>> code. If you read the commit log (there's only 16 commits) in
>> reverse it tells a little story of what was required.
>>
>> All the gabbi tests are now passing (without them being changed)
>> except for four that verify the response strings from exceptions. I
>> didn't copy in exceptions, I created them anew to avoid copying
>> unnecessary nova-isms, and didn't bother (for now) with replicating
>> keyword handling.
>>
>> Unit tests and non-gabbi functional tests were not transferred over
>> (as that would have been something more than "slapdash").
>>
>> Some observations or things to think about:
>>
>> * Since there's only one database and all the db query code is in
>>   the objects, the database handling is simplified. oslo_db setup
>>   can be used more directly.
>>
>> * The objects being oslo versioned objects is kind of overkill in
>>   this context but doesn't get too much in the way.
>>
>> * I collapsed the fields.ResourceClass and objects.ResourceClass
>>   into the same file so the latter was renamed. Doing this
>>   exploration made a lot of the ResourceClass handling look pretty
>>   complicated. Much of that complexity is because we had to deal
>>   with evolving through different functionality. If we built this
>>   functionality in a greenfield repo it could probably be more
>>   simple.
>>
>> * The FaultWrapper middleware is turned off in the WSGI stack
>>   because copying it over from nova would require dealing with a
>>   hierarchy of classes. A simplified version of it would probably
>>   need to be stuck back in (and apparently a gabbi test to exercise
>>   it, as there's not one now).
>>
>> * The number of requirements in the two requirements files is nicely
>>   small.
>>
>> * The scheduler report client in nova, and to a minor degree the
>>   filter scheduler, use some of the same exceptions and ovo.objects
>>   that placement uses, which presents a bit of blechiness with
>>   regards to code duplication. I suppose long term we could consider
>>   a placement-lib or something like that, except that the
>>   functionality provided by the same-named objects and exceptions
>>   are not entirely congruent. From the point of view of the external
>>   part of the placement API what matters are not objects, but JSON
>>   structures.
>>
>> * I've done nothing here with regard to how devstack would choose
>>   between the old and new placement code locations but that will be
>>   something to solve. It seems like it ought to be possible for two
>>   different sources of the placement-code to exist; just register
>>   one endpoint. Since we've declared that service discovery is the
>>   correct and only way to find placement, this ought to be okay.
>>
>> I'm not sure how or if we want to proceed with this topic, but I
>> think this at least allows us to talk about it with less guessing.
>> My general summary is "yeah, this is doable, without huge amounts
>> of work."
>>
> 
> Please don't.
> Having a separate repository would mean that deployers would need to
> implement a separate package for placement plus discussing
> how/when to use it.
> 
> For the moment, I'd prefer to leave operators using the placement
> API by using Nova first and then after like 3 or 4 cycles, possibly
> discussing with them how to cut it.
> 
> At the moment, I think that we already have a good priority for
> placement in Nova, so I don't think it's a problem to still have it in Nova.

Given that the design was always to split (eventually), and part of that
means that we get to start building up a dedicated core team, I'm not
sure why waiting 3 or 4 additional cycles makes sense here.

I get that Pike is probably the wrong release to do this cut, given that
it only *just* became mandatory. But it feels like saying this would be
a Queens goal, and getting things structured in such a way that the
split is easy (like any renaming of binaries, any things that should
deprecate), would seem to be good goals for Pike.

-Sean

-- 
Sean Dague
http://dague.net


Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Sylvain Bauza


On 13/03/2017 13:41, Chris Dent wrote:
> 
> From the start we've been saying that it is probably right for the
> placement service to have its own repository. This is aligned with
> the long term goal of placement being useful to many services, not
> just nova, and also helps to keep placement contained and
> comprehensible and thus maintainable.
> 
> I've been worried for some time that the longer we put this off, the
> more complicated an extraction becomes. Rather than carry on
> worrying about it, I took some time over the weekend to experiment
> with a slapdash extraction to see if I could identify what would be
> the sticking points. The results are here
> 
> https://github.com/cdent/placement
> 
> My methodology was to lay in the basics for being able to run the
> functional (gabbi) tests and then using the failures to fix the
> code. If you read the commit log (there's only 16 commits) in
> reverse it tells a little story of what was required.
> 
> All the gabbi tests are now passing (without them being changed)
> except for four that verify the response strings from exceptions. I
> didn't copy in exceptions, I created them anew to avoid copying
> unnecessary nova-isms, and didn't bother (for now) with replicating
> keyword handling.
> 
> Unit tests and non-gabbi functional tests were not transferred over
> (as that would have been something more than "slapdash").
> 
> Some observations or things to think about:
> 
> * Since there's only one database and all the db query code is in
>   the objects, the database handling is simplified. oslo_db setup
>   can be used more directly.
> 
> * The objects being oslo versioned objects is kind of overkill in
>   this context but doesn't get too much in the way.
> 
> * I collapsed the fields.ResourceClass and objects.ResourceClass
>   into the same file so the latter was renamed. Doing this
>   exploration made a lot of the ResourceClass handling look pretty
>   complicated. Much of that complexity is because we had to deal
>   with evolving through different functionality. If we built this
>   functionality in a greenfield repo it could probably be more
>   simple.
> 
> * The FaultWrapper middleware is turned off in the WSGI stack
>   because copying it over from nova would require dealing with a
>   hierarchy of classes. A simplified version of it would probably
>   need to be stuck back in (and apparently a gabbi test to exercise
>   it, as there's not one now).
> 
> * The number of requirements in the two requirements files is nicely
>   small.
> 
> * The scheduler report client in nova, and to a minor degree the
>   filter scheduler, use some of the same exceptions and ovo.objects
>   that placement uses, which presents a bit of blechiness with
>   regards to code duplication. I suppose long term we could consider
>   a placement-lib or something like that, except that the
>   functionality provided by the same-named objects and exceptions
>   are not entirely congruent. From the point of view of the external
>   part of the placement API what matters are not objects, but JSON
>   structures.
> 
> * I've done nothing here with regard to how devstack would choose
>   between the old and new placement code locations but that will be
>   something to solve. It seems like it ought to be possible for two
>   different sources of the placement-code to exist; just register
>   one endpoint. Since we've declared that service discovery is the
>   correct and only way to find placement, this ought to be okay.
> 
> I'm not sure how or if we want to proceed with this topic, but I
> think this at least allows us to talk about it with less guessing.
> My general summary is "yeah, this is doable, without huge amounts
> of work."
> 

Please don't.
Having a separate repository would mean that deployers would need to
implement a separate package for placement, plus discussions about
how/when to use it.

For the moment, I'd prefer to leave operators using the placement
API through Nova first and then, after 3 or 4 cycles, possibly
discuss with them how to cut it over.

At the moment, I think that we already have a good priority for
placement in Nova, so I don't think it's a problem to still have it in Nova.

My .02,
-Sylvain

> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api] POST /api-wg/news

2017-03-13 Thread Chris Dent

On Mon, 13 Mar 2017, Thierry Carrez wrote:


At the recent Board+TC+UC meeting, the User Committee presented the API
workgroup as a User Committee workgroup, based on data from:

https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee


Yeah, that's been ongoing for a while, and not something any of us in
the working group has done anything about, as it didn't seem to
cause any harm and it came with one benefit: we consistently get
a room at Summit.

Since we haven't had any indications one way or another that it is
bad to be sort of multi-homed, we haven't bothered to change things.


That said, the API WG discussions seem to be mostly developer-driven,
happen on openstack-dev, and the openstack/api-wg repository is listed
under the TC-driven repositories:


It is certainly the case that we are mostly developers-of-openstack-
driven and we target those same developers as the audience for the
guidelines we create. However, it is also the case that the primary
reason we want to guide the developers is so they create APIs which
are consistent and usable by the end users.

So there's a bit of overlap.


It's not really a big deal (I'm fine with it either way), but since we
may want to list working groups on the governance website soon, it would
be great to clarify where that one falls...


Given the desire to have working groups that are TC governed and for
them to be listed, I think it would make sense to clarify the API-WG
as being governed by the TC. That's how we think and operate [1], so may
as well make it official. If we do so we'd still like to keep ties
with the User Committee strong.

Unless, of course, there's some objection from the User Committee or
edleafe and elmiko.

[1] In the sense that if there is an unresolvable dispute, the TC
is the source of mediation and resolution.

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from LBaaS v1 to v2

2017-03-13 Thread Saverio Proto
On 10/03/17 17:49, Michael Johnson wrote:
> Yes, folks have recently deployed the dashboard with success.  I think you
> had that discussion on the IRC channel, so I won't repeat it here.
> 
> Please note, the neutron-lbaas-dashboard does not support LBaaS v1, you must
> have LBaaS v2 deployed for the neutron-lbaas-dashboard to work.  If you are
> trying to use LBaaS v1, you can use the legacy panels included in the older
> versions of horizon.
> 
[..CUT..]
> If you think there is an open bug for the dashboard, please report it in
> https://bugs.launchpad.net/neutron-lbaas-dashboard



Hello,
I updated the bug
https://bugs.launchpad.net/neutron-lbaas-dashboard/+bug/1621403

Can anyone clarify the version matrix to use between the horizon version
and the neutron-lbaas-dashboard panel versions?

Can anyone confirm that both files
_1481_project_ng_loadbalancersv2_panel.py and
_1480_project_loadbalancersv2_panel.py need to be installed?

Is it okay to use branch master of neutron-lbaas-dashboard with horizon
stable/newton?

thank you

Saverio

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance][keystone][requirements] Prioritizing reviews for requirements u-c/g-r

2017-03-13 Thread Davanum Srinivas
Dear Glance and Keystone folks,

A few requirements reviews are stuck behind others, can you please
take a look and help with these?

Glance:
Fixing HostName and adding support for HostAddress -
https://review.openstack.org/#/c/439777/ (Needed for oslo.config min
change - https://review.openstack.org/#/c/442648/)
Fix incompatibilities with WebOb 1.7 -
https://review.openstack.org/#/c/423366/ (Needed for WebOb u-c change
- https://review.openstack.org/#/c/417591/)
Invoke monkey_patching early enough for eventlet 0.20.1 -
https://review.openstack.org/#/c/419074/ (Needed for eventlet u-c
change - https://review.openstack.org/#/c/417590/)

Keystone:
Small fixes for WebOb 1.7 compatibiltity -
https://review.openstack.org/#/c/422234/ (Needed for WebOb u-c change
- https://review.openstack.org/#/c/417591/)

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-13 Thread Chris Dent



From the start we've been saying that it is probably right for the

placement service to have its own repository. This is aligned with
the long term goal of placement being useful to many services, not
just nova, and also helps to keep placement contained and
comprehensible and thus maintainable.

I've been worried for some time that the longer we put this off, the
more complicated an extraction becomes. Rather than carry on
worrying about it, I took some time over the weekend to experiment
with a slapdash extraction to see if I could identify what would be
the sticking points. The results are here

https://github.com/cdent/placement

My methodology was to lay in the basics for being able to run the
functional (gabbi) tests and then use the failures to fix the
code. If you read the commit log (there are only 16 commits) in
reverse it tells a little story of what was required.

All the gabbi tests are now passing (without them being changed)
except for four that verify the response strings from exceptions. I
didn't copy in exceptions, I created them anew to avoid copying
unnecessary nova-isms, and didn't bother (for now) with replicating
keyword handling.

Unit tests and non-gabbi functional tests were not transferred over
(as that would have been something more than "slapdash").

Some observations or things to think about:

* Since there's only one database and all the db query code is in
  the objects, the database handling is simplified. oslo_db setup
  can be used more directly.

* The objects being oslo versioned objects is kind of overkill in
  this context but doesn't get too much in the way.

* I collapsed the fields.ResourceClass and objects.ResourceClass
  into the same file so the latter was renamed. Doing this
  exploration made a lot of the ResourceClass handling look pretty
  complicated. Much of that complexity is because we had to deal
  with evolving through different functionality. If we built this
  functionality in a greenfield repo it could probably be more
  simple.

* The FaultWrapper middleware is turned off in the WSGI stack
  because copying it over from nova would require dealing with a
  hierarchy of classes. A simplified version of it would probably
  need to be stuck back in (and apparently a gabbi test to exercise
  it, as there's not one now).

* The number of requirements in the two requirements files is nicely
  small.

* The scheduler report client in nova, and to a minor degree the
  filter scheduler, use some of the same exceptions and ovo.objects
  that placement uses, which presents a bit of blechiness with
  regards to code duplication. I suppose long term we could consider
  a placement-lib or something like that, except that the
  functionality provided by the same-named objects and exceptions
  are not entirely congruent. From the point of view of the external
  part of the placement API what matters are not objects, but JSON
  structures.

* I've done nothing here with regard to how devstack would choose
  between the old and new placement code locations but that will be
  something to solve. It seems like it ought to be possible for two
  different sources of the placement-code to exist; just register
  one endpoint. Since we've declared that service discovery is the
  correct and only way to find placement, this ought to be okay.
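
To make the last two points concrete, a couple of sketches (not from
the extraction itself). First, service discovery: since clients resolve
placement from the catalog, which repo serves the endpoint is invisible
to them. A minimal sketch with keystoneauth, where the auth values are
placeholders:

from keystoneauth1 import loading
from keystoneauth1 import session

# Hypothetical credentials; any keystoneauth plugin would do here.
auth = loading.get_plugin_loader('password').load_from_options(
    auth_url='http://controller/identity/v3',
    username='admin', password='secret', project_name='admin',
    user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

# endpoint_filter resolves the URL from the service catalog, so swapping
# the placement deployment only means re-registering one endpoint.
resp = sess.get('/resource_providers',
                endpoint_filter={'service_type': 'placement'})
print(resp.json())

Second, the database observation above: with a single database and all
the query code in the objects, the oslo_db wiring can be very direct.
A sketch with assumed names, and assuming the usual [database] config
options are registered elsewhere:

from oslo_db.sqlalchemy import enginefacade


@enginefacade.transaction_context_provider
class RequestContext(object):
    """With one database there is little else the context needs to carry."""


def count_rows(context, model):
    # The reader/writer split from oslo_db is about all the session
    # management a single-database service needs.
    with enginefacade.reader.using(context) as session:
        return session.query(model).count()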

I'm not sure how or if we want to proceed with this topic, but I
think this at least allows us to talk about it with less guessing.
My general summary is "yeah, this is doable, without huge amounts
of work."

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-13 Thread Brent Eagles
Hi all,

Not that it matters one way or the other, but Carlos's comment reminded me
of some trivia regarding owl eye color that I had read recently:

https://owlpedia.wordpress.com/2015/08/06/why-do-owls-have-different-colour-eyes/

On Mon, Mar 13, 2017 at 6:16 AM, Carlos Camacho Gonzalez <
ccama...@redhat.com> wrote:

> Hey!
>
> Yeahp I think most owls have yellow/orange eyes, this can be a good thing
> to try.
>
> Cheers,
> Carlos
>
> On Fri, Mar 10, 2017 at 8:53 PM, Dan Sneddon  wrote:
>
>> On 03/10/2017 08:26 AM, Heidi Joy Tretheway wrote:
>> > Hi TripleO team,
>> >
>> > Here’s an update on your project logo. Our illustrator tried to be as
>> > true as possible to your original, while ensuring it matched the line
>> > weight, color palette and style of the rest. We also worked to make sure
>> > that the three Os in the logo are preserved. Thanks for your patience as we
>> > worked on this! Feel free to direct feedback to me.
>> >
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> This is a huge improvement! Some of the previous drafts looked more like
>> a generic bird and less like an owl.
>>
>> I have a suggestion on how this might be more owl-like. If you look at
>> real owl faces [1], you will see that their eyes are typically yellow,
>> and they often have a white circle around the eyes (black pupil, yellow
>> eye, black/white circle of feathers). I think that we could add a yellow
>> ring around the black pupil, and possibly accentuate the ears (since
>> owls often have white tufts on their ears).
>>
>> I whipped up a quick example of what I'm talking about, it's attached
>> (hopefully it will survive the mailing list).
>>
>> [1] - https://www.google.com/search?q=owl+face=isch=u=univ
>>
>> --
>> Dan Sneddon |  Senior Principal OpenStack Engineer
>> dsned...@redhat.com |  redhat.com/openstack
>> dsneddon:irc|  @dxs:twitter
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] notification update week 10

2017-03-13 Thread Balazs Gibizer

Hi,

Here is the status update / focus setting mail about notification work 
for week 10.


Bugs

There are couple of important outstanding bugs:

[High] https://bugs.launchpad.net/nova/+bug/1665263 The legacy 
instance.delete notification is missing for unscheduled instances. The 
Pike fix has been merged. The Ocata backport has been proposed 
https://review.openstack.org/#/c/441171/


[High] https://bugs.launchpad.net/nova/+bug/1671847 Incorrectly set 
deprecated flag for notify_on_state_change. Patch is in the gate queue 
https://review.openstack.org/#/c/444357 and the Ocata backport has been 
proposed https://review.openstack.org/#/c/444374


[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance
notifications are sent with inconsistent timestamp format. Fix is ready 
for the cores to review https://review.openstack.org/#/c/421981



Versioned notification transformation
-
Patches that just need a second core:
* https://review.openstack.org/#/c/384922 Transform instance.rebuild
notification
* https://review.openstack.org/#/c/396621 Transform
instance.rebuild.error notification
* https://review.openstack.org/#/c/382959 Transform instance.reboot
notification

Here is the set of patches we would like to focus on this week:
* https://review.openstack.org/#/c/411791/ Transform 
instance.reboot.error notification
* https://review.openstack.org/#/c/401992/ Transform 
instance.volume_attach notification
* https://review.openstack.org/#/c/408676/ Transform 
instance.volume_detach notification


There are patches that need to be respun to target the new Pike 
blueprint versioned-notification-transformation-pike. Also there are 
patches in merge conflict that need to be rebased.



Searchlight integration
---
Listing instances
~
The list instances spec https://review.openstack.org/#/c/441692/ was 
well discussed last week. There is still discussion about how to keep 
the Searchlight data in sync with the Nova db for deleted, or deleted 
and archived, instances.


It turned out that Searchlight still needs to be adapted to Nova's 
versioned notifications so the 
https://blueprints.launchpad.net/searchlight/+spec/nova-versioned-notifications 
bp is a hard dependency for the integration work.



bp additional-notification-fields-for-searchlight
~~
4 patches from blueprint additional-notification-fields-for-searchlight
need to be rebased due to merge conflicts, but the code seems ready to be 
merged:

https://review.openstack.org/#/q/label:Code-Review%253E%253D1+status:open+branch:master+topic:bp/additional-notification-fields-for-searchlight

The blueprint needs discussion about how to model BlockDeviceMapping in
the instance notifications. I ran out of time last week so I will try 
to draft the BDM payload object early this week.
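
As a strawman only (the field list below is my guess, which is exactly
what the bp discussion still has to settle), the BDM payload would be
just another versioned object:

from oslo_versionedobjects import base
from oslo_versionedobjects import fields


@base.VersionedObjectRegistry.register
class BlockDeviceMappingPayload(base.VersionedObject):
    # Hypothetical starting field set, for discussion only.
    VERSION = '1.0'
    fields = {
        'source_type': fields.StringField(nullable=True),
        'destination_type': fields.StringField(nullable=True),
        'boot_index': fields.IntegerField(nullable=True),
        'volume_id': fields.UUIDField(nullable=True),
        'delete_on_termination': fields.BooleanField(default=False),
    }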



Other items
---
Short circuit notification payload generation
~
The oslo.messaging dependency has been merged and a new oslo.messaging 
library release is promised early this week, so work will soon continue 
on the nova patch https://review.openstack.org/#/c/428260/


bp json-schema-for-versioned-notifications
~~~
We briefly discussed that this feature might help keep the 
Searchlight data model in sync with the nova notification model. 
However, we are still waiting for somebody to pick up the 
implementation work behind the bp

https://blueprints.launchpad.net/nova/+spec/json-schema-for-versioned-notifications


Weekly meeting
--
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC 
on openstack-meeting-4, so the next meeting will be held on the 14th of 
March 
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170314T17


Cheers,
gibi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api] POST /api-wg/news

2017-03-13 Thread Thierry Carrez
Ed Leafe wrote:
> On Mar 9, 2017, at 9:30 PM, Everett Toews  wrote:
> 
>>> For those who don't know, Everett and I started the API-WG a few years ago,
>>
>> Umm...given this statement, I feel I _must_ provide some clarity.
>>
>> The API WG was started by myself, Jay Pipes, and Chris Yeoh. The ball got 
>> rolling in this email thread [2]. One of our first artifacts was this wiki 
>> page [3]. The 3 of us were the original core members (I'm not sure if [4] is 
>> capable of displaying history). Then we started making commits to the repo 
>> [5]. I'll leave it as an exercise to the reader to determine who got 
>> involved, when they got involved, and in what capacity.
> 
> Sorry if my memory is shaky - mea culpa. I thought Everett and I had had 
> discussions about starting something like this in early 2014 back when we 
> were working at Rackspace and both fighting the inconsistency of the 
> OpenStack APIs. But true, nothing concrete was done at that time.
> 
> What I *do* remember quite clearly when I returned to the world of OpenStack 
> in late 2014 was that Everett was very active in making the group a relevant 
> force for improvement, and has been ever since then. That is the main point I 
> want to emphasize.

While we are exploring the history of the API WG, I'd like a quick fact
check...

At the recent Board+TC+UC meeting, the User Committee presented the API
workgroup as a User Committee workgroup, based on data from:

https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee

That said, the API WG discussions seem to be mostly developer-driven,
happen on openstack-dev, and the openstack/api-wg repository is listed
under the TC-driven repositories:

http://git.openstack.org/cgit/openstack/governance/tree/reference/technical-committee-repos.yaml

It's not really a big deal (I'm fine with it either way), but since we
may want to list working groups on the governance website soon, it would
be great to clarify where that one falls...

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-13 Thread Andrea Frittoli
On Mon, Mar 13, 2017 at 6:52 AM Ghanshyam Mann 
wrote:

> On Sun, Mar 12, 2017 at 7:40 AM, Matt Riedemann 
> wrote:
>
> On 3/10/2017 3:02 PM, Andrea Frittoli wrote:
>
>
> We had a couple of sessions related to this topic at the PTG [0][1].
>
> We agreed that we want to still maintain integration tests only in
> Tempest, which means that API micro versions that have no integration
> impact can be tested via functional tests.
>
>
> To be clear, "integration" here means tests that span operations across
> multiple services, correct? Like a compute test that first creates a port
> in the networking service and a volume in the block storage service and
> then uses those to create a server and maybe take a snapshot of it which is
> then verified was uploaded to the image service.
>
> The non-integration things are self-contained in a single service, like if
> all you need to do is create an aggregate, show it's details and validate
> the response, at a particular microversion, we can just do that in nova
> functional tests, and it's not necessary in Tempest.
>
> It might be worth having a definition of this policy in the Tempest docs
> so when people ask this question again you can just point at the docs.
>
>
> Yes, it will be helpful to have that agreement in the doc. I am adding
> it to the existing microversion testing doc [0] and later will
> refactor it to give it a better and more visible shape.
>
> Tempest maintains info about which microversion tests are implemented
> on the Tempest side, which is helpful for projects to check the coverage
> [1]. The Cinder one was missed, which I added in [2].
>
> As summary:
> Tempest Scope:
> - Only integration tests for microversions are allowed in Tempest
>
If a microversion is important for interoperability it may end up in
Tempest even if it doesn't involve much integration


> - Exception for non-integration tests to fill the schema gap if one exists.
>
Yeah when we fill schema gaps we need at least one test for the top version
to verify that the latest implemented schema is valid


>
> Project Scope:
> - Remaining test coverage of microversions should be on the projects' side.
>   - Projects can cover those as functional tests etc.
>   - Nova currently has at least one functional test per microversion
> and plans to extend coverage like [3]
>   - IMO, only running tests with the 'latest' microversion is not enough;
> each microversion should be tested with positive and negative testing
> (w.r.t. at least the immediately previous microversion).
>
>
>
>
> In terms of which versions we test in the gate, for nova we always run
> with min_microversion = None and max_microversion = latest, which means
> that all tests will be executed.
> Since micro versions are incremental, and each micro version usually
> involves no or one test on Tempest side, I think it will be a while
> before this becomes an issue for the common gate.
>
>
> We test max_microversion=latest only on master. On the devstack stable
> branch we cap the max_microversion, e.g.:
>
>
> https://github.com/openstack-dev/devstack/blob/stable/newton/lib/tempest#L339
>
> --
>
> Thanks,
>
> Matt
>
>
>
>
> [0] https://review.openstack.org/#/c/444727/1
> [1]
> https://docs.openstack.org/developer/tempest/microversion_testing.html#microversion-tests-implemented-in-tempest
> [2] https://review.openstack.org/#/c/444711/
> [3]
> https://blueprints.launchpad.net/nova/+spec/nova-microversion-functional-tests
>
> Thanks
> gmann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-13 Thread Andrea Frittoli
On Sat, Mar 11, 2017 at 10:45 PM Matt Riedemann  wrote:

On 3/10/2017 3:02 PM, Andrea Frittoli wrote:
>
> We had a couple of sessions related to this topic at the PTG [0][1].
>
> We agreed that we want to still maintain integration tests only in
> Tempest, which means that API micro versions that have no integration
> impact can be tested via functional tests.

To be clear, "integration" here means tests that span operations across
multiple services, correct? Like a compute test that first creates a
port in the networking service and a volume in the block storage service
and then uses those to create a server and maybe take a snapshot of it
which is then verified was uploaded to the image service.

Yes, indeed.



The non-integration things are self-contained in a single service, like
if all you need to do is create an aggregate, show it's details and
validate the response, at a particular microversion, we can just do that
in nova functional tests, and it's not necessary in Tempest.

It might be worth having a definition of this policy in the Tempest docs
so when people ask this question again you can just point at the docs.


I agree, we need to do a better job at documenting this.
I just want to avoid having a black and white rule that would not work.

To me tests that should be in Tempest are tests that make sense to run
against any change in any repo, and the reasons for that can be many:
- the test involves multiple services (more than "it uses a token" though)
- the test covers a feature which is key for interoperability
- the test must be reliable and not take too long

Based on this I don't think it's possible to completely avoid the discussion
about which tests should be in Tempest and which not; it's something to be
considered on a test-by-test basis, based on a set of guidelines.

We have an initial definition of scope in [0], but it's probably worth
elaborating more on it. I'll put up a patch so that the discussion on it can
continue in gerrit.

[0]
https://docs.openstack.org/developer/tempest/test-removal.html#tempest-scope


>
> In terms of which versions we test in the gate, for nova we always run
> with min_microversion = None and max_microversion = latest, which means
> that all tests will be executed.
> Since micro versions are incremental, and each micro version usually
> involves no or one test on Tempest side, I think it will be a while
> before this becomes an issue for the common gate.

We test max_microversion=latest only on master. On the devstack stable
branch we cap the max_microversion, e.g.:

Thanks, good point.



https://github.com/openstack-dev/devstack/blob/stable/newton/lib/tempest#L339

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-13 Thread lương hữu tuấn
On Mon, Mar 13, 2017 at 9:34 AM, Thomas Herve  wrote:

> On Fri, Mar 10, 2017 at 9:52 PM, Ryan Brady  wrote:
> >
> > One of the pain points for me as an action developer is the OpenStack
> > actions[1].  Since they all use the same method name to retrieve the
> > underlying client, you cannot simply inherit from more than one so you
> are
> > forced to rewrite the client access methods.  We saw this in creating
> > actions for TripleO[2].  In the base action in TripleO, we have actions
> that
> > make calls to more than one OpenStack client and so we end up re-writing
> and
> > maintaining code.  IMO the idea of using multiple inheritance there
> would be
> > helpful.  It may not require the mixin approach here, but rather a design
> > change in the generator to ensure the method names don't match.
>
> Is there any reason why those methods aren't functions? AFAICT they
> don't use the instance, they could live top level in the action module
> and be accessible by all actions. If you can avoid multiple
> inheritance (or inheritance!) you'll simplify the design. You could
> also do client = NovaAction().get_client() in your own action (if
> get_client was a public method).
>
> --
> Thomas
>
If you want to do that, you need to change the whole structure of the base
action and the whole way of creating an action as you have described, and
IMHO, I myself do not like this idea:

1. Mistral is working well (from the standpoint of creating actions) and
changing it is not short-term work.
2. Using a base class to create a base action is actually a good idea, in
order to keep control and make things easy for action developers. The base
class defines the whole mechanism to execute an action; developers do not
need to take care of it, only provide the OpenStack client (the
_create_client() method) - roughly the pattern sketched below.
3. From the #2 point of view, the alternative NovaAction().get_client()
does not make sense, since the problem here is the subclassing mechanism,
not the way get_client() is called.
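
For reference, a compressed sketch of that base-class pattern (simplified
from memory, not the exact mistral code):

import abc

import six


@six.add_metaclass(abc.ABCMeta)
class OpenStackAction(object):
    """The base class owns the execution mechanics, once, for every action."""

    client_method_name = None  # e.g. "servers.find", set per generated action

    def __init__(self, **params):
        self.params = params

    @abc.abstractmethod
    def _create_client(self):
        """The only thing a concrete action has to provide."""

    def run(self):
        client = self._create_client()
        # Resolve a dotted path like "servers.find" against the client.
        method = client
        for attr in self.client_method_name.split('.'):
            method = getattr(method, attr)
        return method(**self.params)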

@Renat: I myself am not against multiple inheritance either; the only thing
is that if we want to use multiple inheritance, we should think about it more
thoroughly: the inheritance hierarchy, what each layer does, etc. This work
will make the multiple inheritance easy to understand and easy for action
developers to build on. So, IMHO, I vote for making it simple and easy to
understand first (if you continue with mistral-lib) and then doing the next
thing later.

Br,

Tuan/Nokia

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A Summary of Atlanta PTG Summaries (!)

2017-03-13 Thread Hugh Blemings

Hi Emilien, All,

On 8/3/17 09:26, Emilien Macchi wrote:

On Mon, Mar 6, 2017 at 10:45 PM, Hugh Blemings  wrote:

Hiya,

As has been done for the last few Summits/PTGs in Lwood[1] I've pulled
together a list of the various posts to openstack-dev that summarise things
at the PTG - projects, videos, anecdotes etc.

The aggregated list is in this blog post;

http://hugh.blemings.id.au/2017/03/07/openstack-ptg-atlanta-2017-summary-of-summaries/

Which should aggregate across to planet.openstack.org as well.

I'll update this as further summaries appear, corrections/contributions
welcome.


Thanks Hugh, that's awesome!


Most welcome :)


Can you please remove the first link: TripleO:
http://lists.openstack.org/pipermail/openstack-dev/2017-February/112893.html
and only keep this one:
http://lists.openstack.org/pipermail/openstack-dev/2017-February/112995.html


Done, apologies for the delay, got behind in my email this week!

Cheers,
Hugh


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [monasca] future of thresholder on Storm

2017-03-13 Thread Barheine, Joachim
Hi all,

What are your plans for the Monasca thresholder on Apache Storm, especially 
with respect to the upcoming Monasca query language?

I saw that there is a component monasca-transform that is based on Apache Spark.

Are there plans to build a new alarming engine on Spark that supports, for
example, mathematical expressions and multiple metrics in alarm rules?

Thanks and regards
Joachim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-13 Thread Carlos Camacho Gonzalez
Hey!

Yeahp I think most owls have yellow/orange eyes, this can be a good thing
to try.

Cheers,
Carlos

On Fri, Mar 10, 2017 at 8:53 PM, Dan Sneddon  wrote:

> On 03/10/2017 08:26 AM, Heidi Joy Tretheway wrote:
> > Hi TripleO team,
> >
> > Here’s an update on your project logo. Our illustrator tried to be as
> > true as possible to your original, while ensuring it matched the line
> > weight, color palette and style of the rest. We also worked to make sure
> > that the three Os in the logo are preserved. Thanks for your patience as we
> > worked on this! Feel free to direct feedback to me.
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> This is a huge improvement! Some of the previous drafts looked more like
> a generic bird and less like an owl.
>
> I have a suggestion on how this might be more owl-like. If you look at
> real owl faces [1], you will see that their eyes are typically yellow,
> and they often have a white circle around the eyes (black pupil, yellow
> eye, black/white circle of feathers). I think that we could add a yellow
> ring around the black pupil, and possibly accentuate the ears (since
> owls often have white tufts on their ears).
>
> I whipped up a quick example of what I'm talking about, it's attached
> (hopefully it will survive the mailing list).
>
> [1] - https://www.google.com/search?q=owl+face=isch=u=univ
>
> --
> Dan Sneddon |  Senior Principal OpenStack Engineer
> dsned...@redhat.com |  redhat.com/openstack
> dsneddon:irc|  @dxs:twitter
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-13 Thread Thomas Herve
On Fri, Mar 10, 2017 at 9:52 PM, Ryan Brady  wrote:
>
> One of the pain points for me as an action developer is the OpenStack
> actions[1].  Since they all use the same method name to retrieve the
> underlying client, you cannot simply inherit from more than one so you are
> forced to rewrite the client access methods.  We saw this in creating
> actions for TripleO[2].  In the base action in TripleO, we have actions that
> make calls to more than one OpenStack client and so we end up re-writing and
> maintaining code.  IMO the idea of using multiple inheritance there would be
> helpful.  It may not require the mixin approach here, but rather a design
> change in the generator to ensure the method names don't match.

Is there any reason why those methods aren't functions? AFAICT they
don't use the instance, they could live top level in the action module
and be accessible by all actions. If you can avoid multiple
inheritance (or inheritance!) you'll simplify the design. You could
also do client = NovaAction().get_client() in your own action (if
get_client was a public method).
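
For the sake of argument, a sketch of that shape (the session-building
details are hand-waved, since they depend on mistral internals, and the
ctx keys are assumed):

from keystoneauth1 import session as ks_session
from keystoneauth1.identity import v3
from novaclient import client as nova_client


def get_session(ctx):
    # Illustrative only: build a keystoneauth session from whatever
    # security context mistral hands the action.
    auth = v3.Token(auth_url=ctx['auth_uri'], token=ctx['auth_token'],
                    project_id=ctx['project_id'])
    return ks_session.Session(auth=auth)


def get_nova_client(ctx):
    # A module-level factory: any action can call it, no inheritance needed.
    return nova_client.Client('2', session=get_session(ctx))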

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Meeting Thursday Mar 9th at 9:00 UTC (about bug: test_get_volume_absolute_limits fails with no admin credentials)

2017-03-13 Thread zhu.fanglei
Thanks gmann, 


I've uploaded a patch to move it, and fortunately the testcase is not used
in defcore.

Original Mail

Sender:  <ghanshyamm...@gmail.com>
To:  <openstack-dev@lists.openstack.org>
Date: 2017/03/13 15:30
Subject: Re: [openstack-dev] [QA] Meeting Thursday Mar 9th at 9:00 UTC (about
bug: test_get_volume_absolute_limits fails with no admin credentials)

This should be moved to the admin dir with a clear note, like [0].

force_tenant_isolation is the right thing in that test, to avoid failures due
to other tenants' volumes etc. We have a similar case for the quota tests too.

[0]
https://github.com/openstack/tempest/blob/fe1a8e289c2d79df29beaa6b3603afe5feb60fb3/tempest/api/compute/admin/test_auto_allocate_network.py#L31

-gmann

On Mon, Mar 13, 2017 at 3:29 PM,  <zhu.fang...@zte.com.cn> wrote:

hello qa team,

As to #1671256 test_get_volume_absolute_limits fails with no admin credentials

This testcase will fail when admin credentials are not present, because it
uses force_tenant_isolation = True and is not in admin dirs (if it were, it
would be skipped instead of failing)

so, there are 3 possible solutions:

1) to move this testcase into admin dirs

2) to make Tempest skip testcases when force_tenant_isolation = True and
admin credentials are not present

3) to fetch the existing resources in the system, which will let the testcase
run without admin credentials, but may make the code a bit longer.

so, I'd like you to give me some advice, which do you prefer,  thanks:)

BR

zhufl

Original Mail

Sender:  <luz.caza...@intel.com>
To:  <openstack-dev@lists.openstack.org>
Date: 2017/03/09 06:28
Subject: Re: [openstack-dev] [QA] Meeting Thursday Mar 9th at 9:00 UTC

Hey gmann

I did bug triage last weeks (27th Feb – 3rd March).

So far 7 new bugs arrived:

3 confirmed (2 low, 1 medium)
#1669455 tempest run --regex '\[.*\bsmoke\b.*\]'
--blacklist-file=/home/jenkins/tempest/skip.txt not able to run smoke tests
#1668690 Tempest test_server_actions missing evacuate server test (and client)
#1668407 os-assisted-volume-snapshots client is not available

1 in-progress
#1667354 Volume v2 capabilities & scheduler-stats service clients makes v1
APIs call

2 incomplete
#1667443 API endpoints become inaccessible after tempest run
#1671256 test_get_volume_absolute_limits fails with no admin credentials

1 reviewing – not sure if it is a valid bug or working as designed – Related
to resource_cleanup vs teardown (clean after each test case).
#1670693 Tests in tempest.api.network.test_routers.RoutersTest don't clean up
network resources

Please let me know if comments or questions

From: Ghanshyam Mann <ghanshyamm...@gmail.com>
 Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
 Date: Wednesday, March 8, 2017 at 1:09 AM
 To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
 Subject: [openstack-dev] [QA] Meeting Thursday Mar 9th at 9:00 UTC

Hello everyone,

Please remember that the weekly OpenStack QA team IRC meeting will be
Thursday, Mar 9th at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:

https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_March_9th_2017_.280900_UTC.29

Anyone is welcome to add an item to the agenda.

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [acceleration]Atlanta PTG Summary

2017-03-13 Thread Zhipeng Huang
Hi Team,

Sorry for the long delay after our ptg session. Here are the main takeaways
from the meeting (both f2f and virtual):

*1. Pike Development Goal*

We will shoot for a basic functional Cyborg code base for the Pike release
cycle. We will build the initial modules based upon Roman's proposal. These
initial modules are:

*API*: as abstract as possible; multi-tenancy support is not necessary in
Pike.
*DB*: Maintain Cyborg specific device information (for example metadata
extracted during the discovery phase)
*Drivers*: support out-of-tree vendor solution in the long run, and try to
come up with a generic driver (maybe based upon virtio interfaces) for Pike
to get basic functionalities.
*Agent*: the "compute" part of Cyborg thats directly talks to the
accelerators/dedicated devices.
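
Purely as a sketch of how the Drivers/Agent split could look (every name
below is made up; nothing like this exists in the code base yet):

import abc

import six


@six.add_metaclass(abc.ABCMeta)
class AcceleratorDriver(object):
    """What an out-of-tree vendor driver (or the generic one) implements."""

    @abc.abstractmethod
    def discover(self):
        """Return metadata for devices found on this host (kept in the DB)."""

    @abc.abstractmethod
    def attach(self, device_id, instance_id):
        """Called by the Agent to wire a device to an instance."""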

We have designated the prime sponsors for these modules and look
forward to *the
initial BPs at the next team meeting (which will be Mar 15th, Wed this
week)*

*2. Sandboxing proposals*

As agreed in previous meetings as well as at the PTG, we will set up a
sandbox folder for the design proposals (we've covered one from Mellanox and
one from Huawei)

*3. Specific use case targeting*

The task illustrated in section 1 is for building the common code
structure; however, we still need further work on specific use cases.
Although not fully discussed during the PTG meeting, Harm's proposal will
continue to be discussed as the main input for the FPGA use case, whereas
the requirements coming out of the Scientific WG would be the main one for
GPU.

We are looking at to provide basic solutions for these two use cases in
Pike.

*Reminder: our bi-weekly meeting resumes on Wed this week :)*



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Meeting Thursday Mar 9th at 9:00 UTC (about bug: test_get_volume_absolute_limits fails with no admin credentials)

2017-03-13 Thread Ghanshyam Mann
This should be moved to the admin dir with a clear note, like [0].

force_tenant_isolation is the right thing in that test, to avoid failures
due to other tenants' volumes etc. We have a similar case for the quota
tests too.

[0]
https://github.com/openstack/tempest/blob/fe1a8e289c2d79df29beaa6b3603afe5feb60fb3/tempest/api/compute/admin/test_auto_allocate_network.py#L31
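
A sketch of what the move looks like (class and client names here are
illustrative, not the actual tempest code):

from tempest.api.volume import base


class VolumeLimitsAdminTest(base.BaseVolumeAdminTest):
    """Under an admin base class, missing admin credentials means skip, not fail."""

    # Fresh, isolated tenant so other tenants' volumes can't skew the
    # absolute limits being asserted; this is what needs admin creds.
    force_tenant_isolation = True

    def test_get_volume_absolute_limits(self):
        limits = self.volume_limits_client.show_limits()['limits']
        self.assertIn('absolute', limits)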


-gmann

On Mon, Mar 13, 2017 at 3:29 PM,  wrote:

> hello qa team,
>
>
> As to #1671256 test_get_volume_absolute_limits fails with no admin
> credentials
>
>
> This testcase will fail when admin credentials are not present, because
> it uses force_tenant_isolation = True and is not in admin dirs (if it were,
> it would be skipped instead of failing)
>
>
> so, there are 3 possible solutions:
>
> 1) to move this testcase into admin dirs
>
> 2) to make Tempest skip testcases when
> force_tenant_isolation = True and admin credentials are not present
>
> 3) to fetch the existing resources in the system, which will let the
> testcase run without admin credentials, but may make the code a bit
> longer.
>
>
> so, I'd like you to give me some advice, which do you prefer,  thanks:)
>
>
> BR
>
> zhufl
>
>
> Original Mail
> *Sender: * <luz.caza...@intel.com>;
> *To: * <openstack-dev@lists.openstack.org>;
> *Date: *2017/03/09 06:28
> *Subject: **Re: [openstack-dev] [QA] Meeting Thursday Mar 9th at 9:00 UTC*
>
>
> Hey gmann
>
>
>
> I did bug triage last weeks (27th Feb – 3rd March).
>
>
>
> So far 7 new bugs arrived:
>
> 3 confirmed (2 low, 1 medium)
>
> #1669455 tempest run --regex '\[.*\bsmoke\b.*\]'
> --blacklist-file=/home/jenkins/tempest/skip.txt not able to run smoke
> tests
>
> #1668690 Tempest test_server_actions missing evacuate server test (and
> client)
>
> #1668407 os-assisted-volume-snapshots client is not available
>
> 1 in-progress
>
> #1667354 Volume v2 capabilities & scheduler-stats service clients makes
> v1 APIs call
>
> 2 incomplete
>
> #1667443 API endpoints become inaccessible after tempest run
>
> #1671256 test_get_volume_absolute_limits fails with no admin credentials
>
> 1 reviewing – not sure if it is a valid bug or working as
> designed – Related to resource_cleanup vs teardown (clean after each test
> case).
>
> #1670693 Tests in tempest.api.network.test_routers.RoutersTest don't
> clean up network resources
>
>
>
> Please let me know if comments or questions
>
>
>
>
>
> *From: *Ghanshyam Mann <ghanshyamm...@gmail.com>
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" <openstack-dev@lists.openstack.org>
> *Date: *Wednesday, March 8, 2017 at 1:09 AM
> *To: *"openstack-dev@lists.openstack.org" <openstack-dev@lists.
> openstack.org>
> *Subject: *[openstack-dev] [QA] Meeting Thursday Mar 9th at 9:00 UTC
>
>
>
> Hello everyone,
>
> Please remember that the weekly OpenStack QA team IRC meeting will be
> Thursday, Mar 9th at 9:00 UTC in the #openstack-meeting channel.
>
> The agenda for the meeting can be found here:
>
> https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_March_9th_2017_.280900_UTC.29
>
> Anyone is welcome to add an item to the agenda.
>
> -gmann
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-13 Thread Ghanshyam Mann
On Sat, Mar 11, 2017 at 12:10 AM, Andrea Frittoli  wrote:

>
>
> On Fri, Mar 10, 2017 at 2:49 PM Andrea Frittoli 
> wrote:
>
>> On Fri, Mar 10, 2017 at 2:24 PM Doug Hellmann 
>> wrote:
>>
>> Excerpts from Ghanshyam Mann's message of 2017-03-10 10:55:25 +0900:
>> > On Fri, Mar 10, 2017 at 7:23 AM, Lance Bragstad 
>> wrote:
>> >
>> > >
>> > >
>> > > On Thu, Mar 9, 2017 at 3:46 PM, Doug Hellmann 
>> > > wrote:
>> > >
>> > >> Excerpts from Andrea Frittoli's message of 2017-03-09 20:53:54 +:
>> > >> > Hi folks,
>> > >> >
>> > >> > I'm trying to figure out what's the best approach to fade out
>> testing of
>> > >> > deprecated API versions.
>> > >> > We currently host in Tempest API tests for Glance API v1, Keystone
>> API
>> > >> v2
>> > >> > and Cinder API v1.
>> > >> >
>> > >> > According to the guidelines for the "follow-standard-deprecation"
>> tag
>> > >> [0],
>> > >> > when projects that have that tag deprecate a feature:
>> > >> >
>> > >> > "Code will be frozen and only receive minimal maintenance (just so
>> that
>> > >> it
>> > >> > continues to work as-is)."
>> > >> >
>> > >> > I interpret this so that projects should maintain some level of
>> testing
>> > >> of
>> > >> > the deprecated feature, including a deprecated API version.
>> > >> > The QA team does not see value in testing deprecated API versions
>> in the
>> > >> > common gate jobs, so my question is what do to with those tests.
>> > >> >
>> > >> > One option is to maintain them in Tempest until the API version is
>> > >> removed,
>> > >> > and run them in dedicated project jobs.
>> > >> > This means that tempest would have to run those jobs as well, so
>> three
>> > >> > extra jobs, until the API version is removed.
>> > >> >
>> > >> > The other option is to move those tests out of Tempest, into the
>> > >> projects.
>> > >> > This would imply back porting them to all relevant branches as
>> well,
>> > >> but it
>> > >> > would have the advantage of decoupling them from Tempest. It
>> should be
>> > >> no
>> > >> > concern from an API stability POV since the code for that API will
>> be
>> > >> > frozen.
>> > >> > Tests for deprecated APIs in cinder, keystone and glance are all -
>> as
>> > >> far
>> > >> > as I can tell - removed or deprecated from interoperability
>> guidelines,
>> > >> so
>> > >> > moving the tests out of Tempest would not be an issue in that
>> sense.
>> > >> >
>> > >> > The 2nd option involves a bit more initial overhead for the
>> removal of
>> > >> > tests, but I think it would works for the best on the long term.
>> > >> >
>> > >> > There is a 3rd option as well, which is to stop running integration
>> > >> testing
>> > >> > on deprecated API versions before they are actually removed, but I
>> feel
>> > >> > that would not meet the criteria defined by the
>> > >> follow-standard-deprecation
>> > >> > tag.
>> > >> >
>> > >> > Thoughts?
>> > >> >
>> > >> > andrea
>> > >>
>> > >> Are any of those tests used by the interoperability working group
>> > >> (formerly DefCore)?
>> > >>
>> > >>
>> > > That's a good question. I was very curious about this because last I
>> > > checked keystone had v2.0 calls required for defcore. Looks like that
>> might
>> > > not be the case anymore [0]? I started a similar thread to this after
>> the
>> > > PTG since that was something our group talked about extensively
>> during the
>> > > deprecation session [1].
>> > >
>> > Yes, it seems there is no Volume v1 and Keystone v2 test usage in defcore
>> > 2017.01.json [0]. But there are some compute tests which internally use
>> > glance v1 API calls [2]. But on the mentioned flagged action - Nova already
>> > moved to v2 APIs and the tempest part is pending, which can be fixed to
>> > make calls on v2 APIs only (which can be part of this work and quick).
>> >
>> > From the options about deprecated API testing, I am with option 2, which
>> > really takes the load off Tempest test maintenance and the gate.
>> >
>> > But another question is about stable branch testing of those APIs; for
>> > example, glance v1 and identity v2 APIs are supported (not deprecated) in
>> > Mitaka. As Tempest is responsible for testing all stable branch behavior
>> > too, should we keep testing them till Mitaka EOL (till the APIs are in a
>> > deprecated state in all stable branches)?
>>
>> Excellent point.
>>
>>
>> As far as I can tell:
>> - Cinder v1 if I'm not mistaken has been deprecated in Juno, so it's
>> deprecated in all supported releases.
>> - Glance v1 has been deprecated in Newton, so it's deprecated in all
>> supported releases
>>
>
> Of course Glance v1 still has to run on the stable/newton gate jobs, until
> Newton EOL (TBD), so tests will stay in Tempest for a cycle more at least.
> I guess I shouldn't be sending emails on a Friday afternoon?
>

humm, till Mitaka right? The Newton version of glance has the v1 API as
deprecated. And Mitaka is 

Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-13 Thread Ghanshyam Mann
On Sun, Mar 12, 2017 at 7:40 AM, Matt Riedemann  wrote:

> On 3/10/2017 3:02 PM, Andrea Frittoli wrote:
>
>>
>> We had a couple of sessions related to this topic at the PTG [0][1].
>>
>> We agreed that we want to still maintain integration tests only in
>> Tempest, which means that API micro versions that have no integration
>> impact can be tested via functional tests.
>>
>
> To be clear, "integration" here means tests that span operations across
> multiple services, correct? Like a compute test that first creates a port
> in the networking service and a volume in the block storage service and
> then uses those to create a server and maybe take a snapshot of it which is
> then verified was uploaded to the image service.
>
> The non-integration things are self-contained in a single service, like if
> all you need to do is create an aggregate, show it's details and validate
> the response, at a particular microversion, we can just do that in nova
> functional tests, and it's not necessary in Tempest.
>
> It might be worth having a definition of this policy in the Tempest docs
> so when people ask this question again you can just point at the docs.
>
>
Yes, it will be helpful to have that agreement in the doc. I am adding
it to the existing microversion testing doc [0] and later will refactor
it to give it a better and more visible shape.

Tempest maintains info about which microversion tests are implemented on
the Tempest side, which is helpful for projects to check the coverage [1].
The Cinder one was missed, which I added in [2].

As summary:
Tempest Scope:
- Only integration tests for microversions are allowed in Tempest
- Exception for non-integration tests to fill the schema gap if one exists.

Project Scope:
- Remaining test coverage of microversions should be on the projects' side.
  - Projects can cover those as functional tests etc.
  - Nova currently has at least one functional test per microversion and
plans to extend coverage like [3]
  - IMO, only running tests with the 'latest' microversion is not enough;
each microversion should be tested with positive and negative testing
(w.r.t. at least the immediately previous microversion) - a sketch follows.
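
A minimal, self-contained sketch of that last point (the endpoint, the
token and the choice of the 2.25/2.26 boundary are assumptions for
illustration, and it assumes at least one server exists):

import requests

NOVA = 'http://controller:8774/v2.1'
HEADERS = {'X-Auth-Token': 'replace-me'}


def list_servers_detail(microversion):
    headers = dict(HEADERS)
    headers['X-OpenStack-Nova-API-Version'] = microversion
    return requests.get(NOVA + '/servers/detail', headers=headers).json()


# Positive: a field introduced at 2.26 (server tags) shows up at 2.26 ...
assert 'tags' in list_servers_detail('2.26')['servers'][0]
# ... negative: and is absent one microversion earlier.
assert 'tags' not in list_servers_detail('2.25')['servers'][0]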



>
>> In terms of which versions we test in the gate, for nova we always run
>> with min_microversion = None and max_microversion = latest, which means
>> that all tests will be executed.
>> Since micro versions are incremental, and each micro version usually
>> involves no or one test on Tempest side, I think it will be a while
>> before this becomes an issue for the common gate.
>>
>
> We test max_microversion=latest only on master. On the devstack stable
> branch we cap the max_microversion, e.g.:
>
> https://github.com/openstack-dev/devstack/blob/stable/newton
> /lib/tempest#L339
>
> --
>
> Thanks,
>
> Matt
>
>


[0] https://review.openstack.org/#/c/444727/1
[1]
https://docs.openstack.org/developer/tempest/microversion_testing.html#microversion-tests-implemented-in-tempest
[2] https://review.openstack.org/#/c/444711/
[3]
https://blueprints.launchpad.net/nova/+spec/nova-microversion-functional-tests

Thanks
gmann



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

