Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Clint Byrum
Excerpts from Christopher Aedo's message of 2017-03-10 19:30:18 -0800:
> On Fri, Mar 10, 2017 at 6:20 PM, Clint Byrum  wrote:
> > Excerpts from Fox, Kevin M's message of 2017-03-10 23:45:06 +:
> >> So, this is the kind of thinking I'm talking about... OpenStack today is
> >> more than just IaaS in the tent. Trove (DBaaS), Sahara (Hadoop,Spark,etc
> >> aaS), Zaqar (Messaging aaS) and many more services. But they seem to be
> >> treated as second class citizens, as they are not "IaaS".
> >>
> >
> > It's not that they're second class citizens. It's that their community
> > is smaller by count of users, operators, and developers. This should not
> > come as a surprise, because the lowest common denominator in any user
> > base will always receive more attention.
> >
> >> > Why should it strive to be anything except an excellent building block
> >> for other technologies?
> >>
> >> You misinterpret my statement. I'm in full agreement with you. The
> >> above services should be excellent building blocks too, but are suffering
> >> from lack of support from the IaaS layer. They deserve the ability to
> >> be excellent too, but need support/vision from the greater community
> >> that hasn't been forthcoming.
> >>
> >
> > You say it like there's some overarching plan to suppress parts of the
> > community and there's a pack of disgruntled developers who just can't
> > seem to get OpenStack to work for Trove/Sahara/AppCatalog/etc.
> >
> > We all have different reasons for contributing in the way we do.  Clearly,
> > not as many people contribute to the Trove story as do the pure VM-on-nova
> > story.
> >
> >> I agree with you, we should embrace the container folks and not treat
> >> them as separate. I think that's critical if we want to allow things
> >> like Sahara or Trove to really fulfil their potential. This is the path
> >> towards being an OpenSource AWS competitor, not just for being able to
> >> request vm's in a cloudy way.
> >>
> >> I think that looks something like:
> >> OpenStack Advanced Service (trove, sahara, etc) -> Kubernetes ->
> >> Nova VM or Ironic Bare Metal.
> >>
> >
> > That's a great idea. However, AFAICT, Nova is _not_ standing in Trove,
> > Sahara, or anyone else's way of doing this. Seriously, try it. I'm sure
> > it will work.  And in so doing, you will undoubtedly run into friction
> > from the APIs. But unless you can describe that _now_, you have to go try
> > it and tell us what broke first. And then you can likely submit feature
> > work to nova/neutron/cinder to make it better. I don't see anything in
> > the current trajectory of OpenStack that makes this hard. Why not just do
> > it? The way you ask, it's like you have a team of developers just sitting
> > around shaving yaks waiting for an important OpenStack development task.
> >
> > The real question is why aren't Murano, Trove and Sahara in most current
> > deployments? My guess is that it's because most of our current users
> > don't feel they need it. Until they do, Trove and Sahara will not be
> > priorities. If you want them to be priorities _pay somebody to make them
> > a priority_.
> 
> This particular point really caught my attention.  You imply that
> these additional services are not widely deployed because _users_
> don't want them.  The fact is most users are completely unaware of
> them because these services require the operator of the cloud to
> support them.  In fact they often require the operator of the cloud to
> support them from the initial deployment, as these services (and
> *most* OpenStack services) are frighteningly difficult to add to an
> already deployed cloud without downtime and high risk of associated
> issues.
> 
> I think it's unfair to claim these services are unpopular because
> users aren't asking for them when it's likely users aren't even aware
> of them (do OVH, Vexxhost, Dreamhost, Rackspace or others provide a
> user-facing list of potential OpenStack services with a voting option?
>  Not that I've ever seen!)
> 
> I bring this up to point out how much more popular ALL of these
> services would be if the _users_ were able to enable them without
> requiring operator intervention and support.
> 
> Based on our current architecture, it's nearly impossible for a new
> project to be deployed on a cloud without cloud-level admin
> privileges.  Additionally almost none of the projects could even work
> this way (with Rally being a notable exception).  I guess I'm kicking
> this dead horse because for a long time I've argued we need to back
> away from the tightly coupled nature of all the projects, but
> (speaking of horses) it seems that horse is already out of the barn.
> (I really wish I could work in one more proverb dealing with horses
> but it's getting late on a Friday so I'll stop now.)
> 

I see your point, and believe it is valid.

However, we do have something like a voting system in the market
economy. The users may not say "I want Trove". But they may 

Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Christopher Aedo
On Fri, Mar 10, 2017 at 6:20 PM, Clint Byrum  wrote:
> Excerpts from Fox, Kevin M's message of 2017-03-10 23:45:06 +:
>> So, this is the kind of thinking I'm talking about... OpenStack today is
>> more than just IaaS in the tent. Trove (DBaaS), Sahara (Hadoop,Spark,etc
>> aaS), Zaqar (Messaging aaS) and many more services. But they seem to be
>> treated as second class citizens, as they are not "IaaS".
>>
>
> It's not that they're second class citizens. It's that their community
> is smaller by count of users, operators, and developers. This should not
> come as a surprise, because the lowest common denominator in any user
> base will always receive more attention.
>
>> > Why should it strive to be anything except an excellent building block
>> for other technologies?
>>
>> You misinterpret my statement. I'm in full agreement with you. The
>> above services should be excellent building blocks too, but are suffering
>> from lack of support from the IaaS layer. They deserve the ability to
>> be excellent too, but need support/vision from the greater community
>> that hasn't been forthcoming.
>>
>
> You say it like there's some overarching plan to suppress parts of the
> community and there's a pack of disgruntled developers who just can't
> seem to get OpenStack to work for Trove/Sahara/AppCatalog/etc.
>
> We all have different reasons for contributing in the way we do.  Clearly,
> not as many people contribute to the Trove story as do the pure VM-on-nova
> story.
>
>> I agree with you, we should embrace the container folks and not treat
>> them as separate. I think that's critical if we want to allow things
>> like Sahara or Trove to really fulfil their potential. This is the path
>> towards being an OpenSource AWS competitor, not just for being able to
>> request vm's in a cloudy way.
>>
>> I think that looks something like:
>> OpenStack Advanced Service (trove, sahara, etc) -> Kubernetes ->
>> Nova VM or Ironic Bare Metal.
>>
>
> That's a great idea. However, AFAICT, Nova is _not_ standing in Trove,
> Sahara, or anyone else's way of doing this. Seriously, try it. I'm sure
> it will work.  And in so doing, you will undoubtedly run into friction
> from the APIs. But unless you can describe that _now_, you have to go try
> it and tell us what broke first. And then you can likely submit feature
> work to nova/neutron/cinder to make it better. I don't see anything in
> the current trajectory of OpenStack that makes this hard. Why not just do
> it? The way you ask, it's like you have a team of developers just sitting
> around shaving yaks waiting for an important OpenStack development task.
>
> The real question is why aren't Murano, Trove and Sahara in most current
> deployments? My guess is that it's because most of our current users
> don't feel they need it. Until they do, Trove and Sahara will not be
> priorities. If you want them to be priorities _pay somebody to make them
> a priority_.

This particular point really caught my attention.  You imply that
these additional services are not widely deployed because _users_
don't want them.  The fact is most users are completely unaware of
them because these services require the operator of the cloud to
support them.  In fact they often require the operator of the cloud to
support them from the initial deployment, as these services (and
*most* OpenStack services) are frighteningly difficult to add to an
already deployed cloud without downtime and high risk of associated
issues.

I think it's unfair to claim these services are unpopular because
users aren't asking for them when it's likely users aren't even aware
of them (do OVH, Vexxhost, Dreamhost, Rackspace or others provide a
user-facing list of potential OpenStack services with a voting option?
 Not that I've ever seen!)

I bring this up to point out how much more popular ALL of these
services would be if the _users_ were able to enable them without
requiring operator intervention and support.

Based on our current architecture, it's nearly impossible for a new
project to be deployed on a cloud without cloud-level admin
privileges.  Additionally almost none of the projects could even work
this way (with Rally being a notable exception).  I guess I'm kicking
this dead horse because for a long time I've argued we need to back
away from the tightly coupled nature of all the projects, but
(speaking of horses) it seems that horse is already out of the barn.
(I really wish I could work in one more proverb dealing with horses
but it's getting late on a Friday so I'll stop now.)

-Christopher


>> Not what we have today:
>> OpenStack Advanced Service -> Nova VM or Ironic Bare Metal
>>
>> due to the focus on the APIs of VMs being only for IaaS and not for
>> actually running cloud software on them.
>>
>
> The above is an unfounded and unsupported claim. What _exactly_ would
> you want Nova/Neutron/Cinder's API to do differently to support "cloud
> software" better? Why is everyone dodging that 

Re: [openstack-dev] [devstack] A few questions on configuring DevStack for Neutron

2017-03-10 Thread Christopher Aedo
On Thu, Oct 8, 2015 at 5:41 PM, Monty Taylor  wrote:
> On 10/08/2015 07:13 PM, Christopher Aedo wrote:
>>
>> On Thu, Oct 8, 2015 at 9:38 AM, Sean M. Collins 
>> wrote:
>>>
>>> Please see my response here:
>>>
>>>
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-October/076251.html
>>>
>>> In the future, do not create multiple threads since responses will get
>>> lost
>>
>>
>> Yep, thank you Sean - saw your response yesterday and was going to
>> follow-up this thread with a "please ignore" and a link to the other
>> thread.  I opted not to in hopes of reducing the noise but I think
>> your note here is correct and will close the loop for anyone who
>> happens across only this thread.
>>
>> (Secretly though I hope this thread somehow becomes never-ending like
>> the "don't -1 for a long commit message" thread!)
>
>
> (I think punctuation should go outside the parenthetical)?

Apologies for the late reply to this Monty but a response is required
considering the importance of proper punctuation as regards
parentheses.

According to the Internet[1] (which is widely regarded as the
authority in such matters), the original usage of punctuation in this
case is the appropriate usage when the entire sentence is inside the
parentheses.

Hopefully this diversion from the topic does not warrant a new thread.

[1]: http://www.grammarbook.com/punctuation/parens.asp

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2017-03-10 19:09:42 -0500:
> On 10/03/17 12:27, Monty Taylor wrote:
> > On 03/10/2017 10:59 AM, Clint Byrum wrote:
> You may be familiar with the Kuryr project, which integrates Kubernetes 
> deployments made by Magnum with Neutron networking so that other Nova 
> servers can talk directly to the containers and other fun stuff. IMHO 
> it's exactly the kind of thing OpenStack should be doing to make users' 
> lives better, and give a compelling reason to install k8s on top of 
> OpenStack instead of on bare metal.
> 
> So here's a fun thing I learned at the PTG: according to the Magnum 
> folks, the main thing preventing them from fully adopting Kuryr is that 
> the k8s application servers, provisioned with Nova, need to make API 
> calls to Neutron to set up the ports as containers move around. And 
> there's no secure way to give Keystone authentication credentials to an 
> application server to do what it needs - and, especially, to do *only* 
> what it needs.
> 
> http://lists.openstack.org/pipermail/openstack-dev/2016-October/105304.html
> 
> Keystone devs did agree in back in Austin that when they rejigger the 
> default policy files it will be done in such a way as to make the 
> authorisation component of this feasible (by requiring a specific 
> reader/writer role, not just membership of the project, to access APIs), 
> but that change hasn't happened yet AFAIK. I suspect that it isn't their 
> top priority. Kevin has been campaigning for *years* to get Nova to 
> provide a secure way to inject credentials into a server in the same way 
> that this is built in to EC2, GCE and (I assume but haven't checked) 
> Azure. And they turned him down flat every time saying that this was not 
> Nova's problem.
> 
> Sorry, but if OpenStack isn't a good, secure platform for running 
> Kubernetes on then that is a HAIR ON FIRE-level *existential* problem in 
> 2017.
> 

You're very right on this. And kuryr not working the way it should is _a
disaster_. I was hoping it had worked itself out by now.

I've argued against certain aspects of instance-users in the past (and
for others). I think it might be worth it to take another shot at
actually writing up a spec again.

In the meantime, we could always have shade inject instance-user
creds... 
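
The shade workaround mentioned above could be sketched roughly as follows. This is a hypothetical helper (the function name and file layout are illustrative, not shade's actual API): it renders a cloud-init user-data blob that writes a clouds.yaml onto the booting instance. Note that it demonstrates the problem as much as the workaround, since the password travels and rests in plain text, which is exactly the weakness a real instance-user mechanism would eliminate.

```python
import json

def credential_userdata(auth_url, username, password, project):
    """Render a cloud-init user-data blob that drops a clouds.yaml
    onto the booting instance. JSON is valid YAML, so json.dumps is
    enough for the file body. Insecure by construction: the secret
    is visible to anyone who can read the instance's user data.
    """
    clouds = {"clouds": {"envvars": {"auth": {
        "auth_url": auth_url,
        "username": username,
        "password": password,
        "project_name": project,
    }}}}
    lines = [
        "#cloud-config",
        "write_files:",
        "  - path: /etc/openstack/clouds.yaml",
        "    permissions: '0600'",
        "    content: |",
        "      " + json.dumps(clouds),
    ]
    return "\n".join(lines)
```

A string like this would then be passed as the server's user data at boot time.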

> We can't place too much blame on individual projects though, because I 
> believe the main reason this doesn't Just Work already is that there has 
> been an unspoken consensus that we needed to listen to users like you 
> but not to users like Kevin, and the elected leaders of our community 
> have done nothing to either disavow or officially adopt that consensus. 
> We _urgently_ need to decide if that's what we actually want and make 
> sure it is prominently documented so that both users and developers know 
> what's what.
> 
> FWIW I'm certain you must have hit this same issue in infra - probably 
> you were able to use pre-signed Swift URLs when uploading log files to 
> avoid needing credentials on servers allocated by nodepool? That's 
> great, but not every API in OpenStack has a pre-signed URL facility, and 
> nor should they have to.
> 

Indeed, that's how infra nodes store logs in swift.
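
For readers unfamiliar with the mechanism being discussed: Swift's TempURL middleware signs a method, expiry, and object path with an HMAC-SHA1 over a per-account secret, so a server can upload to one object without holding any Keystone credentials. A minimal sketch (the function name is illustrative; the signing scheme is Swift's documented one):

```python
import hmac
from hashlib import sha1
from time import time

def make_temp_url(method, path, key, valid_for):
    """Sign a Swift object path for token-less access.

    path looks like "/v1/AUTH_account/container/object"; key is the
    secret previously stored as X-Account-Meta-Temp-URL-Key on the
    account. The holder of the URL can perform exactly one method on
    exactly one object until the expiry passes.
    """
    expires = int(time() + valid_for)
    hmac_body = f"{method}\n{expires}\n{path}"
    sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
    return f"{path}?temp_url_sig={sig}&temp_url_expires={expires}"
```

This narrow, capability-style delegation is the facility that, as noted above, most other OpenStack APIs lack.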

> (BTW I proposed a workaround for Magnum/Kuryr at the PTG by using a 
> pre-signed Zaqar URL with a subscription triggering a Mistral workflow, 
> and I've started working on a POC.)
> 

What triggers the boot to kick over the bucket full of golf balls
though?

> > Considering computers as some-how inherently 'better' or 'worse' than
> > some of the 'cloud-native' concepts is hogwash. Different users have
> > different needs. As Clint points out - kubernetes needs to run
> > _somewhere_. CloudFoundry needs to run _somewhere_. So those are at
> > least two other potential users who are not me and my collection of
> > things I want to run that want to run in computers.
> 
> I think we might be starting to talk about different ideas. The whole 
> VMs vs. containers fight _is_ hogwash. You're right to call it out as 
> such. We hear far too much about it, and it's totally unfair to the 
> folks who work on the VM side. But that isn't what this discussion is about.
> 
> Google has done everyone a minor disservice by appropriating the term 
> "cloud-native" and using it in a context such that it's effectively been 
> redefined to mean "containers instead of VMs". I've personally stopped 
> using the term because it's more likely to generate confusion than clarity.
> 
> What "cloud-native" used to mean to me was an application that knows 
> it's running in the cloud, and uses the cloud's APIs. As opposed to 
> applications that could just as easily be running in a VPS or on bare 
> metal, but happen to be running in a VM provisioned by Nova.
> 

+1 for "apps that know they're in the cloud", and further apps that know
how to talk to their cloud.

And also +1 for listening to folks who want a little more help in
interacting with their cloud from 

Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2017-03-10 23:45:06 +:
> So, this is the kind of thinking I'm talking about... OpenStack today is
> more than just IaaS in the tent. Trove (DBaaS), Sahara (Hadoop,Spark,etc
> aaS), Zaqar (Messaging aaS) and many more services. But they seem to be
> treated as second class citizens, as they are not "IaaS".
> 

It's not that they're second class citizens. It's that their community
is smaller by count of users, operators, and developers. This should not
come as a surprise, because the lowest common denominator in any user
base will always receive more attention.

> > Why should it strive to be anything except an excellent building block
> for other technologies?
> 
> You misinterpret my statement. I'm in full agreement with you. The
> above services should be excellent building blocks too, but are suffering
> from lack of support from the IaaS layer. They deserve the ability to
> be excellent too, but need support/vision from the greater community
> that hasn't been forthcoming.
> 

You say it like there's some overarching plan to suppress parts of the
community and there's a pack of disgruntled developers who just can't
seem to get OpenStack to work for Trove/Sahara/AppCatalog/etc.

We all have different reasons for contributing in the way we do.  Clearly,
not as many people contribute to the Trove story as do the pure VM-on-nova
story.

> I agree with you, we should embrace the container folks and not treat
> them as separate. I think that's critical if we want to allow things
> like Sahara or Trove to really fulfil their potential. This is the path
> towards being an OpenSource AWS competitor, not just for being able to
> request vm's in a cloudy way.
> 
> I think that looks something like:
> OpenStack Advanced Service (trove, sahara, etc) -> Kubernetes ->
> Nova VM or Ironic Bare Metal.
> 

That's a great idea. However, AFAICT, Nova is _not_ standing in Trove,
Sahara, or anyone else's way of doing this. Seriously, try it. I'm sure
it will work.  And in so doing, you will undoubtedly run into friction
from the APIs. But unless you can describe that _now_, you have to go try
it and tell us what broke first. And then you can likely submit feature
work to nova/neutron/cinder to make it better. I don't see anything in
the current trajectory of OpenStack that makes this hard. Why not just do
it? The way you ask, it's like you have a team of developers just sitting
around shaving yaks waiting for an important OpenStack development task.

The real question is why aren't Murano, Trove and Sahara in most current
deployments? My guess is that it's because most of our current users
don't feel they need it. Until they do, Trove and Sahara will not be
priorities. If you want them to be priorities _pay somebody to make them
a priority_.

> Not what we have today:
> OpenStack Advanced Service -> Nova VM or Ironic Bare Metal
> 
> due to the focus on the APIs of VMs being only for IaaS and not for
> actually running cloud software on them.
> 

The above is an unfounded and unsupported claim. What _exactly_ would
you want Nova/Neutron/Cinder's API to do differently to support "cloud
software" better? Why is everyone dodging that question? Name one
specific change and how it would actually be consumed.

I personally think it's just the general frustration that comes at
this stage of maturity of a project. There's a contraction of hype,
lower investment in general, and everyone is suppressing their panic
and trying to reason about what we can do to make it better, but nobody
actually knows. Meanwhile, our users and operators need us to keep
making things better. Some are even _paying us_ to make things better.

> I'm sorry if that sounds a bit confusing. It's hard to
> explain. I can try and elaborate if you want. Zane's
> posting here does help explain it a little: >
> http://www.zerobanana.com/archive/2015/04/24#a-vision-for-openstack
> 

Honestly, I don't understand the reality that Zane's vision is driving at in
that post. More Zaqar? Why not just do that? What's standing in the way?

> The other alternative is to clearly define OpenStack to be just IaaS,
> and kick all the non IaaS stuff out of the tent. (I do not prefer
> this). It will hurt in the short term but long term could be better
> for those projects than the current status quo as new opportunities for
> switching base dependencies will present themselves and no longer will
> be hamstrung by those that feel their use case isn't important or think
> that the existing api's are "good enough" to handle the use case.
> 

Why would we kick out perfectly healthy pieces of software that want
to be OpenStack-native? Nobody is saying that IaaS clouds aren't well
complemented by native higher level projects. But in the app catalog
case, there's little consumption, and waning contribution, so it's worth
considering whether its continued maintenance and existence is a drain
on the overall community. Sounds like it is, and we'll 

Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Zane Bitter

On 10/03/17 14:30, Jay Pipes wrote:

On 03/10/2017 01:37 PM, Zane Bitter wrote:

On 09/03/17 23:57, Renat Akhmerov wrote:

And this whole discussion is taking me to the question: is there really
any officially accepted strategy for OpenStack for 1, 3, 5 years? Is
there any ultimate community goal we’re moving to regardless of
underlying technologies (containers, virtualization etc.)? I know we’re
now considering various community goals like transition to Python 3.5
etc. but these goals don’t tell anything about our future as an IT
ecosystem from user perspective. I may assume that I’m just not aware of
it. I’d be glad if it was true. I’m eager to know the answers for these
questions.


Me too! There was one effort started by John Garbutt in December, a
technical vision for OpenStack in the form of a TC resolution in the
governance repo:

https://review.openstack.org/#/c/401226/

I wasn't a fan of the draft content, but I believe his intention was to
seed it with a formalised version of our de-facto historical position
(tl;dr legacy apps from the 90s). As long as that's a starting point for
the discussion and not a conclusion then I think this was a valuable
contribution. I commented with a non-exhaustive list of things that I
would expect to see at least debated in a vision for a cloud computing
platform, which I will reproduce here since it's relevant to this thread:

* Infinite scaling - the ability in principle to scale from zero to an
arbitrarily large number of users without rewriting your application
(e.g. if your application can store one file in Swift then there's no
theoretical limit to how many it can store. c.f. Cinder where at some
point you'd have to start juggling multiple volumes.)
* Granularity of allocation - pay only for the resources you actually
use, rather than to allocate a chunk that you may or may not be using
(so Nova->containers->FaaS, Cinder->Swift, Trove->??? [RIP MagnetoDB],
)
* Full control of infrastructure - notwithstanding the above, maintain
Nova/Cinder/Neutron/Trove/ so that legacy applications, highly
specialised applications, and higher-level services like PaaS can make
fully customised use of the virtual infrastructure.
* Hardware virtualisation - make anything that might typically be done
in hardware available in a multi-tenant software-defined environment:
servers, routers, load balancers, firewalls, video codecs, GPGPUs,
FPGAs...
* Built-in reliability - don't require even the smallest apps to have 3
VMs + a cluster manager to enforce any reliability guarantees; provide
those guarantees using multi-tenant services that efficiently share
resources between applications (see also: Infinite scaling, Granularity
of allocation).
* Application control - (securely) give applications control over their
own infrastructure, so that no part of the application needs to reside
outside of the cloud.
* Integration - cloud services that effectively form part of the user's
application can communicate amongst themselves, where appropriate,
without the need for client-side glue (see also: Built-in reliability).
* Interoperability - the same applications can be deployed on a variety
of private and public OpenStack clouds.


Those are all interesting technical concepts to think about and discuss.
However, what Kevin said originally in his response about the OpenStack
community needing to decide what exactly it *is* and what scope
OpenStack should pursue, is the real foundational question that needs to
be addressed. Until it is, none of the rest of these topics have much
contextual relevance and are just a distraction, IMHO.


Hey Jay,
I respect your opinion here, and I agree that's the question we need to 
answer. TBH I'm happy any way we can get to an answer.


I suspect however, that approaching it directly will not prove feasible 
due to a lack of shared terminology. When everyone has a different idea 
in their head (and AFAICT they do) of what "IaaS" means, discussions 
devolve quite quickly into meaninglessness.


It seems to me that having the conversation at a different level of 
abstraction, that of the technical principles we think the solution 
should embody, could potentially be more productive. I believe it's 
fairly easy to judge what is in scope (including for services we haven't 
imagined yet) by referring back to principles at this level of 
abstraction if we can agree on them first.


I'm open to other approaches though.

cheers,
Zane.



Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-10 Thread Ken'ichi Ohmichi
2017-03-10 13:32 GMT-08:00 Matthew Treinish :
> On Fri, Mar 10, 2017 at 12:34:31PM -0800, Ken'ichi Ohmichi wrote:
>> Hi John,
>>
>> Now Tempest is testing microversions only for Nova and contains some
>> testing framework for re-using for another projects.
>> On this framework, we can implement necessary microversions tests as
>> we want and actually many microversions of Nova are not tested by
>> Tempest.
>> We can see the tested microversion of Nova on
>> https://github.com/openstack/tempest/blob/master/doc/source/microversion_testing.rst#microversion-tests-implemented-in-tempest
>>
>> Before implementing microversion testing for Cinder, we will implement
>> JSON-Schema validation for API responses for Cinder.
>> The validation will be helpful for testing base microversion of Cinder
>> API and we will be able to implement the microversion tests based on
>> that.
>> This implementation is marked as 7th priority in this Pike cycle as
>> https://etherpad.openstack.org/p/pike-qa-priorities
>>
>> In addition, now Cinder V3 API is not tested. So we are going to
>> enable v3 tests with some restructure of Tempest in this cycle.
>> The detail is described on the part of "Volume API" of
>> https://etherpad.openstack.org/p/tempest-api-versions-in-pike
>
>
> Umm, I don't know what you're basing that on, but there have been cinder v3
> tests and cinder microversion support in Tempest since Newton. It was 
> initially
> added in this patch: https://review.openstack.org/#/c/300639/

Yeah, that is for v3, but I think that is the only one at this time.
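
The JSON-Schema response validation described above could be sketched, in a deliberately simplified stand-in form, like this (real Tempest carries full JSON-Schema documents per API and per microversion; the field names here are illustrative):

```python
# Simplified stand-in for Tempest's JSON-Schema response checks:
# map each expected response field to its expected Python type.
VOLUME_SCHEMA = {
    "id": str,
    "size": int,
    "status": str,
}

def check_response(body, schema=VOLUME_SCHEMA):
    """Return a list of field-level problems; an empty list means the
    response body matches the (simplified) schema."""
    problems = []
    for field, expected in schema.items():
        if field not in body:
            problems.append(f"missing field: {field}")
        elif not isinstance(body[field], expected):
            problems.append(f"{field}: expected {expected.__name__}")
    return problems
```

A per-microversion variant would simply swap in a different schema for each version range, which is how a base-version check can be extended once microversion tests land.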

>> 2017-03-10 11:37 GMT-08:00 John Griffith :
>> > Hey Everyone,
>> >
>> > So along the lines of an earlier thread that went out regarding testing of
>> > deprecated API's and Tempest etc [1].
>> >
>> > Now that micro-versions are *the API versioning scheme to rule them all* 
>> > one
>> > question I've not been able to find an answer for is what we're going to
>> > promise here for support and testing.  My understanding thus far is that 
>> > the
>> > "community" approach here is "nothing is ever deprecated, and everything is
>> > supported forever".
>> >
>> > That's sort of a tall order IMO, but ok.  I've already had some questions
>> > from folks about implementing an explicit Tempest test for every
>> > micro-versioned implementation of an API call also.  My response has been
>> > "nahh, just always test latest available".  This kinda means that we're not
>> > testing/supporting the previous versions as promised though.
>> >
>> > Anyway; I'm certain that between Nova and the API-WG this has come up and 
>> > is
>> > probably addressed, just wondering if somebody can point me to some
>> > documentation or policies in this respect.
>> >
>> > Thanks,
>> > John
>



Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Zane Bitter

On 10/03/17 17:43, Clint Byrum wrote:

Excerpts from Fox, Kevin M's message of 2017-03-10 20:47:55 +:

Just because the market hasn't immediately shifted from IaaS to containers doesn't mean it won't
happen eventually, nor that Google is wrong in its push for containers over IaaS. It took a long
time (still ongoing) to move from physical machines to VMs. I believe the same shift will happen
with containers; we're at least a few years out from it becoming commonplace. That
doesn't mean folks can assume it will always be the way it is and that the "market has
spoken". "The only constants are death and taxes." :)

You're right in the assertion that k8s needs a place to run, but that doesn't
necessarily mean OpenStack unless we help integrate it and make it a great
place to run.

I'm glad the IaaS stuff in OpenStack suits your needs. That's great. It hasn't
served a great number of users' needs, though, and has for a number of years at
least misled folks into believing it will. If we want it to just be an
IaaS, let's free up those users to move on elsewhere.

Does OpenStack intend just to be an IaaS in a few years or something else?


I'm not a fan of describing the more limited use case as "just IaaS". 
It's all infrastructure; the question for me is whom it's providing a 
service to: the application running on that infrastructure, or only the 
(human) operator of that application?



Why should it strive to be anything except an excellent building block
for other technologies?


Wrong question IMO. The right question is, what would make it an 
excellent building block?


We need to set the bar higher than "same as bare metal ('apps will work 
just like any other places') but you also get to manage an OpenStack 
deployment too".


See my reply to Monty for details on some ways that OpenStack isn't as 
excellent a building block as it ought to be.


By happy coincidence, the same features that would make OpenStack a 
better building block for "other technologies" are in many cases also 
what would make it a better place to run other applications of the type 
that Kevin's users want to write, since the "other technologies" are 
themselves applications from OpenStack's perspective.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Zane Bitter

On 10/03/17 12:27, Monty Taylor wrote:

On 03/10/2017 10:59 AM, Clint Byrum wrote:

I'm curious what you (Josh) or Zane would change too.
Containers/apps/kubes/etc. have to run on computers with storage and
networks. OpenStack provides a pretty rich set of features for giving
users computers with storage on networks, and operators a way to manage
those. So I fail to see how that is svn to "cloud native"'s git. It
seems more foundational and complementary.


I agree with this strongly.

I get frustrated really quickly when people tell me that, as a user, I
_should_ want something different than what I actually do want.

It turns out, I _do_ want computers - or at least things that behave
like computers - for some percentage of my workloads. I'm not the only
one out there who wants that.

There are other humans out there who do not want computers. They don't
like computers at all. They want generic execution contexts into which
they can put their apps.

It's just as silly to tell those people that they _should_ want
computers as it is to tell me that I _should_ want a generic execution
context.


I totally agree with you. There is room for all kinds of applications on 
OpenStack, and the great thing about an open community is that they can 
all have a voice. In theory.



One of the wonderful things about the situation we're in now is that if
you stack a k8s on top of an OpenStack then you empower the USER to
decide what their workload is and which types of features it is - rather
than forcing a "cloud native" vision dreamed up by "thought leaders"
down everyone's throats.


I'd go even further than that - many workloads are likely a mix of 
_both_, and we need to empower users to be able to use the right tools 
for the right parts of the job and _integrate_ them together. That's 
where OpenStack can add huge value to k8s and the like.


You may be familiar with the Kuryr project, which integrates Kubernetes 
deployments made by Magnum with Neutron networking so that other Nova 
servers can talk directly to the containers and other fun stuff. IMHO 
it's exactly the kind of thing OpenStack should be doing to make users' 
lives better, and give a compelling reason to install k8s on top of 
OpenStack instead of on bare metal.


So here's a fun thing I learned at the PTG: according to the Magnum 
folks, the main thing preventing them from fully adopting Kuryr is that 
the k8s application servers, provisioned with Nova, need to make API 
calls to Neutron to set up the ports as containers move around. And 
there's no secure way to give Keystone authentication credentials to an 
application server to do what it needs - and, especially, to do *only* 
what it needs.


http://lists.openstack.org/pipermail/openstack-dev/2016-October/105304.html

Keystone devs did agree back in Austin that when they rejigger the 
default policy files it will be done in such a way as to make the 
authorisation component of this feasible (by requiring a specific 
reader/writer role, not just membership of the project, to access APIs), 
but that change hasn't happened yet AFAIK. I suspect that it isn't their 
top priority. Kevin has been campaigning for *years* to get Nova to 
provide a secure way to inject credentials into a server in the same way 
that this is built in to EC2, GCE and (I assume but haven't checked) 
Azure. And they turned him down flat every time, saying that this was not 
Nova's problem.


Sorry, but if OpenStack isn't a good, secure platform for running 
Kubernetes on then that is a HAIR ON FIRE-level *existential* problem in 
2017.


We can't place too much blame on individual projects though, because I 
believe the main reason this doesn't Just Work already is that there has 
been an unspoken consensus that we needed to listen to users like you 
but not to users like Kevin, and the elected leaders of our community 
have done nothing to either disavow or officially adopt that consensus. 
We _urgently_ need to decide if that's what we actually want and make 
sure it is prominently documented so that both users and developers know 
what's what.


FWIW I'm certain you must have hit this same issue in infra - probably 
you were able to use pre-signed Swift URLs when uploading log files to 
avoid needing credentials on servers allocated by nodepool? That's 
great, but not every API in OpenStack has a pre-signed URL facility, and 
nor should they have to.
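(For context, Swift's pre-signed "temp URL" mechanism works by HMAC-signing the request method, an expiry timestamp, and the object path with a secret key set on the account, so whoever holds the URL needs no Keystone credentials at all. A minimal sketch of the client-side computation — the path and key here are made up, and real deployments configure the key through Swift's tempurl middleware:)

```python
import hashlib
import hmac
import time

def make_temp_url(path, key, method="GET", ttl=3600):
    """Build a Swift-style pre-signed URL for `path`, valid for `ttl` seconds."""
    # Sign "METHOD\nEXPIRES\nPATH" with the account's temp URL key.
    expires = int(time.time()) + ttl
    body = f"{method}\n{expires}\n{path}".encode()
    sig = hmac.new(key.encode(), body, hashlib.sha1).hexdigest()
    return f"{path}?temp_url_sig={sig}&temp_url_expires={expires}"

# Anyone holding this URL can GET the object until it expires, with no token:
print(make_temp_url("/v1/AUTH_demo/logs/run-42.txt", key="s3cret"))
```

The point is that the secret never leaves the service that generated the URL — which is exactly the facility most other OpenStack APIs lack.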


(BTW I proposed a workaround for Magnum/Kuryr at the PTG by using a 
pre-signed Zaqar URL with a subscription triggering a Mistral workflow, 
and I've started working on a POC.)



It also turns out that the market agrees. Google App Engine was not
successful until Google added IaaS. Azure was not successful until
Microsoft added IaaS. Amazon, which has a container service and a
serverless service, is all built around the ecosystem that is centered on
... that's right ... an IaaS.


I think this is the point where you're supposed to insert the disclaimer 
that, in any 

Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Fox, Kevin M
So, this is the kind of thinking I'm talking about... OpenStack today is more 
than just IaaS in the tent. Trove (DBaaS), Sahara (Hadoop, Spark, etc. aaS), Zaqar 
(Messaging aaS) and many more services. But they seem to be treated as second-class 
citizens, as they are not "IaaS".

> Why should it strive to be anything except an excellent building block
for other technologies?

You misinterpret my statement. I'm in full agreement with you. The above 
services should be excellent building blocks too, but are suffering from lack 
of support from the IaaS layer. They deserve the ability to be excellent too, 
but need support/vision from the greater community that hasn't been forthcoming.

I agree with you, we should embrace the container folks and not treat them as 
separate. I think that's critical if we want to allow things like Sahara or 
Trove to really fulfil their potential. This is the path towards being an 
open-source AWS competitor, not just towards being able to request VMs in a cloudy 
way.

I think that looks something like:
OpenStack Advanced Service (trove, sahara, etc) -> Kubernetes -> Nova VM or 
Ironic Bare Metal.

Not what we have today:
OpenStack Advanced Service -> Nova VM or Ironic Bare Metal

due to the focus on the APIs of VMs being only for IaaS and not for actually 
running cloud software on.

I'm sorry if that sounds a bit confusing. It's hard to explain. I can try to 
elaborate if you want. Zane's posting here helps explain it a little:
http://www.zerobanana.com/archive/2015/04/24#a-vision-for-openstack

The other alternative is to clearly define OpenStack to be just IaaS, and kick 
all the non-IaaS stuff out of the tent. (I do not prefer this.) It will hurt in 
the short term, but long term it could be better for those projects than the 
current status quo: new opportunities for switching base dependencies will 
present themselves, and they will no longer be hamstrung by those who feel their 
use case isn't important or who think that the existing APIs are "good enough" to 
handle it.

Thanks,
Kevin
 

From: Clint Byrum [cl...@fewbar.com]
Sent: Friday, March 10, 2017 2:43 PM
To: openstack-dev
Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog

Excerpts from Fox, Kevin M's message of 2017-03-10 20:47:55 +:
> Just because the market hasn't immediately shifted from IaaS to containers 
> doesn't mean it won't happen eventually, or that Google's wrong in their 
> push for containers over IaaS. It took a long time (still ongoing) to move 
> from physical machines to VMs. I believe the same will happen with containers. 
> We're at least a few years out from it becoming commonplace. 
> That doesn't mean folks can assume it will always be the way it is, and that 
> the "market has spoken". "The only constants are death and taxes." :)
>
> You're right in the assertion that k8s needs a place to run, but that doesn't 
> necessarily mean OpenStack, unless we help integrate it and make it a great 
> place to run it.
>
> I'm glad the IaaS stuff in OpenStack suits your needs. That's great. It hasn't 
> served a great number of users' needs though, and has been at least misleading 
> folks into believing it would for a number of years. If we want it to just be 
> an IaaS, let's free up those users to move on elsewhere.
>
> Does OpenStack intend just to be an IaaS in a few years or something else?
>

Why should it strive to be anything except an excellent building block
for other technologies?

Containers have a community and there's no reason we should think of that
as separate from us. We're friends, and we overlap when it makes sense.

All the container tech that I see out there still needs computers /
disks / networks to run on. OpenStack is a pretty decent way to chop your
computers up and give fully automated control of all aspects of them to
multiple tenants.  What else would you want it to do that isn't already
an aspect of Kubernetes, CloudFoundry, etc.?



Re: [openstack-dev] [tc][appcat][murano][app-catalog] The future of the App Catalog

2017-03-10 Thread Michael Glasgow

On 3/9/2017 6:08 AM, Thierry Carrez wrote:

Christopher Aedo wrote:

On Mon, Mar 6, 2017 at 3:26 AM, Thierry Carrez  wrote:

[...]
In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) makes us look like we
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.


Without something like Murano "thinly wrapping" docker apps, how would
you propose current users of OpenStack clouds deploy docker apps?  Or
any other app for that matter?  It seems a little unfair to talk about
murano apps this way when no reasonable alternative exists for easily
deploying docker apps.  When I look back at the recent history of how
we've handled containers (nova-docker, magnum, kubernetes, etc) it
does not seem like we're making it easy for the folks who want to
deploy a container on their cloud...

[...]
I just think that adding the Murano abstraction in the middle of it and
using an AppCatalog-provided Murano-powered generic Docker container
wrapper is introducing unnecessary options and complexity -- options
that are strategically hurting us when we talk to those adjacent
communities...


I don't disagree with any of your observations thus far, but I'm curious 
what people think this portends for the future of Murano with respect to 
non-containerized workloads.


Let's assume for a moment that VMs aren't going away tomorrow.  Some 
won't agree, but I'm not sure that whole debate adds a lot of value here.


In that context, Murano is interesting to me because it seems like the 
OO-like abstraction it provides is the right layer at which to link 
application components for such workloads, where you have, say, a Fruit 
class that can be extended for Apples and Oranges, and any type of 
Animal can come along and consume any type of Fruit.  While not a 
panacea, there are some clear advantages to working at this layer 
relative to trying to link everything together at the level of Heat, for 
example.
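(The OO analogy can be sketched in plain Python — purely illustrative, since Murano actually expresses this kind of composition in MuranoPL, its own YAML-based DSL:)

```python
class Fruit:
    """Base application component; concrete fruits override how they deploy."""
    def deploy(self):
        raise NotImplementedError

class Apple(Fruit):
    def deploy(self):
        return "apple deployed"

class Orange(Fruit):
    def deploy(self):
        return "orange deployed"

class Animal:
    """Any Animal can consume any Fruit: it binds to the Fruit
    interface, not to any concrete class."""
    def consume(self, fruit):
        return f"consumed after: {fruit.deploy()}"

print(Animal().consume(Apple()))   # -> consumed after: apple deployed
```

The value of working at this layer is exactly that `Animal` never needs to know which `Fruit` it was handed — linking components by interface rather than by concrete wiring, which is harder to express at the Heat level.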


For this strategy to work, a critical element will be driving 
standardization in those interfaces.  I had seen the App Catalog as a 
venue for driving that, not necessarily today but possibly at some point 
in the future.  It's not the *only* place to do that, and after batting 
it around with some of the guys here, I'm starting to think it's not 
even the best place to do it.  But it was a thought I had when first 
reading this thread.


It makes sense to me that for container workloads, the COE should handle 
all of this orchestration, and OpenStack should just get out of the way. 
 But in the case of VMs, Murano's abstraction seems useful and holds 
the promise of reducing overall complexity.  So if we truly believe that 
OpenStack and containers are complementary, it would be great if someone 
can articulate a vision for that relationship.


To be clear, I have no strong preference wrt the future of the App 
Catalog.  If anything, I'd lean toward retirement for all the reasons 
that have been given.  But I do wish that someone more familiar than me 
with this area could speak to the longer term vision for Murano. 
Granted it's an orthogonal concern, but clearly this decision will have 
some effects on its future.


--
Michael Glasgow





[openstack-dev] What do we want to be when we grow up?

2017-03-10 Thread Joshua Harlow

This thread is an extraction of:

http://lists.openstack.org/pipermail/openstack-dev/2017-March/thread.html#113362

And is a start of some ideas for what we as a group want our vision to 
be. I'm going to start by taking what Zane put up (see above thread) 
and mutating it.


* Infinite scaling (kept the same); it would be nice to target this, although 
it may be a bit abstract with the definition I saw, but meh, that's ok, 
gotta start somewhere.


* Granularity of allocation (kept as is).

* Be opinionated; let's actually pick *specific* technologies based on 
well-thought-out decisions about what we want out of those technologies 
and integrate them deeply (and if we make a bad decision, that's ok, we 
are all grown-ups and we'll deal with it). IMHO it hasn't turned out 
well trying to have drivers for everything and everyone, so let's umm 
stop doing that.


* Lead others; we are one of the older cloud foundations (I think?) so 
we should be leading others such as the CNCF, so we must be 
heavily outreaching to them and helping them learn from our 
mistakes (in all reality I don't quite know why we need an OpenStack 
foundation as an entity in the first place, instead of say just joining 
the Linux Foundation and playing nicely with others there).


* Granularity of allocation (doesn't feel like this should need 
mentioning anymore, since I sort of feel it's implicit nowadays, but 
fair enough, might as well keep it for the sake of remembering it).


* Full control of infrastructure (mostly discard); I don't think we 
necessarily need to have full control of infrastructure anymore. I'd 
rather target something that builds on the layers of others at this 
point and offers value there. If it is really needed, provide a 
light-weight *opinionated* version of nova, cinder, neutron that the 
upper layers can use (perhaps this light-weight version is what becomes 
of the current IaaS projects as they exist).


* Hardware virtualization (seems mostly implicit nowadays)

* Built-in reliability (same as above, if we don't do this we should all 
look around for jobs elsewhere)


* Application control - (securely) (same as above)

* Integration - cloud services that effectively form part of the user's 
application can communicate amongst themselves, where appropriate, 
without the need for client-side glue (see also: Built-in reliability).


   - Ummm, maybe; if this creates yet another ecosystem where only the 
things inside that ecosystem work with each other, then nope, I veto 
that; if it means the services created work with other services over 
standardized APIs (that are bigger than a single ecosystem) then I'm ok 
with that.


* Interoperability - kept as is (though I can't really say how many 
public clouds there are anymore to interoperate with).


* Self-healing - whatever services we write should heal and scale 
themselves; if an operator has to twiddle some settings or gets called up 
at night due to something busting itself, we failed.


* Self-degradation - whatever services we write should be able to 
degrade their functionality *automatically*, taking into account their 
surroundings (also related to self-healing).


* Heavily embrace the fact that a growing number of users don't 
actually want to own any kind of server (including myself) - AWS 
Lambda or an equivalent may be worth the energy to actually make a reality.


* Move beyond copying what others have already done (i.e. AWS) and develop 
the equivalent of a cross-company research 'arm?' that can utilize the 
smart people we have to actually develop leading-edge solutions (and be 
ok with those solutions failing, because they may).


* More (as I think of them I'll write them)

-Josh



Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-10 Thread Matthew Treinish
On Fri, Mar 10, 2017 at 12:34:31PM -0800, Ken'ichi Ohmichi wrote:
> Hi John,
> 
> Now Tempest is testing microversions only for Nova, and it contains a
> testing framework that can be reused by other projects.
> With this framework, we can implement the necessary microversion tests as
> we want; actually, many microversions of Nova are not tested by
> Tempest.
> We can see the tested microversions of Nova on
> https://github.com/openstack/tempest/blob/master/doc/source/microversion_testing.rst#microversion-tests-implemented-in-tempest
> 
> Before implementing microversion testing for Cinder, we will implement
> JSON-Schema validation for API responses for Cinder.
> The validation will be helpful for testing base microversion of Cinder
> API and we will be able to implement the microversion tests based on
> that.
> This implementation is marked as 7th priority in this Pike cycle as
> https://etherpad.openstack.org/p/pike-qa-priorities
> 
> In addition, the Cinder v3 API is not tested now. So we are going to
> enable v3 tests with some restructuring of Tempest in this cycle.
> The details are described in the "Volume API" section of
> https://etherpad.openstack.org/p/tempest-api-versions-in-pike


Umm, I don't know what you're basing that on, but there have been cinder v3
tests and cinder microversion support in Tempest since Newton. It was initially
added in this patch: https://review.openstack.org/#/c/300639/

-Matt Treinish


> 
> 2017-03-10 11:37 GMT-08:00 John Griffith :
> > Hey Everyone,
> >
> > So along the lines of an earlier thread that went out regarding testing of
> > deprecated API's and Tempest etc [1].
> >
> > Now that micro-versions are *the API versioning scheme to rule them all* one
> > question I've not been able to find an answer for is what we're going to
> > promise here for support and testing.  My understanding thus far is that the
> > "community" approach here is "nothing is ever deprecated, and everything is
> > supported forever".
> >
> > That's sort of a tall order IMO, but ok.  I've already had some questions
> > from folks about implementing an explicit Tempest test for every
> > micro-versioned implementation of an API call also.  My response has been
> > "nahh, just always test latest available".  This kinda means that we're not
> > testing/supporting the previous versions as promised though.
> >
> > Anyway; I'm certain that between Nova and the API-WG this has come up and is
> > probably addressed, just wondering if somebody can point me to some
> > documentation or policies in this respect.
> >
> > Thanks,
> > John




Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-10 Thread Andrea Frittoli
On Fri, Mar 10, 2017 at 8:39 PM Ken'ichi Ohmichi 
wrote:

> Hi John,
>
> Now Tempest is testing microversions only for Nova, and it contains a
> testing framework that can be reused by other projects.
> With this framework, we can implement the necessary microversion tests as
> we want; actually, many microversions of Nova are not tested by
> Tempest.
> We can see the tested microversions of Nova on
>
> https://github.com/openstack/tempest/blob/master/doc/source/microversion_testing.rst#microversion-tests-implemented-in-tempest
>
> Before implementing microversion testing for Cinder, we will implement
> JSON-Schema validation for API responses for Cinder.
> The validation will be helpful for testing base microversion of Cinder
> API and we will be able to implement the microversion tests based on
> that.
> This implementation is marked as 7th priority in this Pike cycle as
> https://etherpad.openstack.org/p/pike-qa-priorities
>
> In addition, the Cinder v3 API is not tested now. So we are going to
> enable v3 tests with some restructuring of Tempest in this cycle.
> The details are described in the "Volume API" section of
> https://etherpad.openstack.org/p/tempest-api-versions-in-pike
>
> Thanks
> Ken Ohmichi
>
> ---
>
> 2017-03-10 11:37 GMT-08:00 John Griffith :
> > Hey Everyone,
> >
> > So along the lines of an earlier thread that went out regarding testing
> of
> > deprecated API's and Tempest etc [1].
> >
> > Now that micro-versions are *the API versioning scheme to rule them all*
> one
> > question I've not been able to find an answer for is what we're going to
> > promise here for support and testing.  My understanding thus far is that
> the
> > "community" approach here is "nothing is ever deprecated, and everything
> is
> > supported forever".
> >
> > That's sort of a tall order IMO, but ok.  I've already had some questions
> > from folks about implementing an explicit Tempest test for every
> > micro-versioned implementation of an API call also.  My response has been
> > "nahh, just always test latest available".  This kinda means that we're
> not
> > testing/supporting the previous versions as promised though.
>

We had a couple of sessions related to this topic at the PTG [0][1].

We agreed that we still want to maintain integration tests only in Tempest,
which means that API microversions that have no integration impact can be
tested via functional tests.

Since we do schema validation, when someone wants to develop an integration
test for a specific microversion, there may be a gap between the schemas
implemented on the Tempest side and the current one - we have already had this
case in nova.
We decided we shall monitor this gap, and fill it with schema definitions at
least at the end of each cycle, or more often if required.
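(To illustrate the principle — not Tempest's actual implementation, which keeps full JSON-Schema definitions under tempest.lib.api_schema — a response-shape check boils down to something like this toy validator; the volume schema below is loosely modeled on a Cinder response and is not the real one:)

```python
def check_schema(obj, schema):
    """Minimal JSON-Schema-like check: validates 'type', 'required'
    and nested 'properties'. Returns a list of error strings."""
    errors = []
    types = {"object": dict, "array": list, "string": str, "integer": int}
    expected = types.get(schema.get("type"))
    if expected and not isinstance(obj, expected):
        return [f"expected {schema['type']}, got {type(obj).__name__}"]
    if isinstance(obj, dict):
        for field in schema.get("required", []):
            if field not in obj:
                errors.append(f"missing required field: {field}")
        for field, sub in schema.get("properties", {}).items():
            if field in obj:
                errors.extend(check_schema(obj[field], sub))
    return errors

volume_schema = {  # illustrative shape, not Cinder's real schema
    "type": "object",
    "required": ["id", "status"],
    "properties": {"id": {"type": "string"},
                   "status": {"type": "string"},
                   "size": {"type": "integer"}},
}

resp = {"id": "vol-1", "status": "available", "size": 10}
assert check_schema(resp, volume_schema) == []
```

A check like this pins down the *base* response shape, so later microversion tests only need to assert what actually changed.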

Tests can define which range of microversions they are applicable to.
Initial tests for v3 in cinder will have no min_version, and they may get a
max version if their behaviour is no longer valid beyond a certain
microversion. Tests developed specifically for a microversion will have
min_version set accordingly.

In terms of which versions we test in the gate, for nova we always run with
min_microversion = None and max_microversion = latest, which means that all
tests will be executed.
Since microversions are incremental, and each microversion usually
involves zero or one tests on the Tempest side, I think it will be a while
before this becomes an issue for the common gate.

andrea

[0] https://etherpad.openstack.org/p/qa-ptg-pike-microversion
[1] https://etherpad.openstack.org/p/qa-ptg-pike-schema-validation
[2] https://docs.openstack.org/developer/tempest/microversion_testing.html

> >
> > Anyway; I'm certain that between Nova and the API-WG this has come up
> and is
> > probably addressed, just wondering if somebody can point me to some
> > documentation or policies in this respect.
> >
> > Thanks,
> > John
> >
> >


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Fox, Kevin M
That's the power of open source. You don't HAVE to do it with investors and 
business plans. You can do it in a garage, if you have the right idea! :)

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Friday, March 10, 2017 11:03 AM
To: openstack-dev
Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog

Excerpts from Joshua Harlow's message of 2017-03-10 10:09:24 -0800:
> Clint Byrum wrote:
> > Excerpts from Joshua Harlow's message of 2017-03-09 21:53:58 -0800:
> >> Renat Akhmerov wrote:
> >>> On 10 Mar 2017, at 06:02, Zane Bitter wrote:
> 
>  On 08/03/17 11:23, David Moreau Simard wrote:
> > The App Catalog, to me, sounds sort of like a weird message that
> > OpenStack somehow requires applications to be
> > packaged/installed/deployed differently.
> > If anything, perhaps we should spend more effort on advertising that
> > OpenStack provides bare metal or virtual compute resources and that
> > apps will work just like any other places.
>  Look, it's true that legacy apps from the 90s will run on any VM you
>  can give them. But the rest of the world has spent the last 15 years
>  moving on from that. Applications of the future, and increasingly the
>  present, span multiple VMs/containers, make use of services provided
>  by the cloud, and interact with their own infrastructure. And users
>  absolutely will need ways of packaging and deploying them that work
>  with the underlying infrastructure. Even those apps from the 90s
>  should be taking advantage of things like e.g. Neutron security
>  groups, configuration of which is and will always be out of scope for
>  Docker Hub images.
> 
>  So no, we should NOT spend more effort on advertising that we aim to
>  become to cloud what Subversion is to version control. We've done far
>  too much of that already IMHO.
> >>> 100% agree with that.
> >>>
> >>> And this whole discussion is taking me to the question: is there really
> >>> any officially accepted strategy for OpenStack for 1, 3, 5 years?
> >> I can propose what I would like for a strategy (it's not more VMs and
> >> more neutron security groups...), though if it involves (more) design by
> >> committee, count me out.
> >>
> >> I honestly believe we have to do the equivalent of a technology leapfrog
> >> if we actually want to be relevant; but maybe I'm too eager...
> >>
> >
> > Open source isn't really famous for technology leapfrogging.
>
> Time to get famous.
>
> I hate accepting what the status quo is just because it's not been
> famous (or easy, or turned out, or ...) before.
>

Good luck. I can't see how you get an investor to enable you to do that
in this context without an absolute _mountain_ of relatively predictable
service-industry profit involved.



Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-10 Thread Ryan Brady
On Fri, Mar 10, 2017 at 5:16 AM, Renat Akhmerov 
wrote:

>
> On 10 Mar 2017, at 15:09, Dougal Matthews  wrote:
>
> On 10 March 2017 at 04:22, Renat Akhmerov 
> wrote:
>
>> Hi,
>>
>> I probably like the base class approach better too.
>>
>> However, I’m trying to understand if we need this variety of classes.
>>
>>    - Do we need a separate class for asynchronous actions? IMO, since
>>    is_sync() is just an instance method that can potentially return both
>>    True and False based on the instance state, it shouldn't be introduced
>>    by a special class. Otherwise it's confusing that a class declared as
>>    AsyncAction can actually be synchronous (if its is_sync() returns True).
>>    So maybe we should just leave this method in the base class.
>>    - I'm also wondering if we should just always pass "context" into the
>>    run() method. Those action implementations that don't need it could
>>    just ignore it. Not sure though.
>>
>> This is a good point. I had originally thought it would be backwards
> incompatible to make this change - however, users will need to update their
> actions to inherit from mistral-lib so they will need to opt in. Then in
> mistral we can do something like...
>
> if isinstance(action, mistral_lib.Action):
>     action.run(ctx)
> else:
>     # deprecation warning about action now inheriting from mistral_lib and
>     # taking a context etc.
>     action.run()
>
>>
> Yes, right.
>

The example here by Dougal looks like the way to move forward.  Thanks for
all of the feedback.
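For concreteness, here is a minimal sketch of the base-class shape being discussed: is_sync() lives on the single base class (no separate AsyncAction), and run() always receives a context that implementations are free to ignore. Class and method names are illustrative of the thread, not the final mistral-lib API.

```python
# Sketch only: 'Action' and 'EchoAction' are made-up names illustrating the
# design from the thread, not the real mistral-lib classes.
import abc


class Action(abc.ABC):
    def is_sync(self):
        # An instance method, so a single class can report sync or async
        # depending on its state; subclasses override as needed.
        return True

    @abc.abstractmethod
    def run(self, context):
        """Execute the action; implementations may ignore context."""


class EchoAction(Action):
    def __init__(self, output):
        self.output = output

    def run(self, context):
        return self.output


action = EchoAction("hello")
if isinstance(action, Action):
    result = action.run(context={})  # new-style: context is always passed
else:
    result = action.run()  # legacy path, with a deprecation warning

print(result)  # -> hello
```

The isinstance branch above is the same opt-in dispatch Dougal proposed: only actions that inherit from the new base class get the context argument.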


>
> As far as mixin approach, I’d say I’d be ok with having mixing for
>> context-based actions. Although, like Dougal said, it may be a little
>> harder to read, this approach gives a huge flexibility for long term.
>> Imagine if we want to have a class of actions that some different kind of
>> information. Just making it up… For example, some of my actions need to be
>> aware of some policies (Congress-like) or information about metrics of the
>> current operating system (this is probably a bad example because it’s easy
>> to use standard Python modules but I’m just trying to illustrate the idea).
>> In this case we could have PolicyMixin and OperatingSystemMixin that would
>> set required info into the instance state or provide with handle interfaces
>> for more advanced uses.
>>
>
> I like the idea of mixins if we can see a future with many small
> components that can be included in an action class. However, like you I
> didn't manage to think of any real examples.
>
> It should be possible to migrate to a mixin approach later if we have the
> need.
>
>
> Well, I didn’t manage to find real use cases probably because I don’t
> develop lots of actions :) Although the example with policies seems almost
> real to me. This is something that was raised several times during design
> sessions in the past. Anyway, I agree with you that it seems like we can add
> mixins later if we want to. I don’t see any reasons now why not.
>
>
One of the pain points for me as an action developer is the OpenStack
actions[1].  Since they all use the same method name to retrieve the
underlying client, you cannot simply inherit from more than one, so you are
forced to rewrite the client access methods.  We saw this in creating
actions for TripleO[2].  In the base action in TripleO, we have actions
that make calls to more than one OpenStack client, so we end up
rewriting and maintaining duplicate code.  IMO the idea of using multiple
inheritance there would be helpful.  It may not require the mixin approach
here, but rather a design change in the generator to ensure the method
names don't match.

[1] https://github.com/openstack/mistral/blob/master/mistral/actions/openstack/actions.py
[2] https://github.com/openstack/tripleo-common/blob/master/tripleo_common/actions/base.py#L27
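A toy sketch of the collision Ryan describes, and of the generator change he suggests. The class and method names here are illustrative, not the real generated Mistral code: with a shared accessor name, Python's MRO means a subclass of two bases can only reach one client; distinct per-service names compose cleanly.

```python
# The problem: both generated base actions expose a client accessor with the
# same name, so multiple inheritance shadows one of the clients.
class NovaAction:
    def _get_client(self):
        return "nova-client"


class GlanceAction:
    def _get_client(self):
        return "glance-client"


class MigrateAction(NovaAction, GlanceAction):
    # _get_client resolves to NovaAction's via the MRO; the Glance client is
    # shadowed, so this class must reimplement client access itself.
    def run(self):
        return self._get_client()


# One possible fix in the generator: emit a distinct accessor name per
# service, so a subclass can reach every client without overrides.
class NovaActionV2:
    def get_nova_client(self):
        return "nova-client"


class GlanceActionV2:
    def get_glance_client(self):
        return "glance-client"


class MigrateActionV2(NovaActionV2, GlanceActionV2):
    def run(self):
        return (self.get_nova_client(), self.get_glance_client())


print(MigrateAction().run())    # -> nova-client (Glance client unreachable)
print(MigrateActionV2().run())  # -> ('nova-client', 'glance-client')
```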


> Renat Akhmerov
> @Nokia
>
>
>


-- 
Ryan Brady
Cloud Engineering
rbr...@redhat.com
919.890.8925 <(919)%20890-8925>


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Fox, Kevin M
Just because the market hasn't immediately shifted from IaaS to containers 
doesn't mean it won't happen eventually, or that Google is wrong in its push 
for containers over IaaS. It took a long time (and is still ongoing) to move 
from physical machines to VMs. I believe the same shift will happen with 
containers. We're at least a few years out from it becoming more commonplace. 
That doesn't mean folks can assume it will always be the way it is, and the 
"market has spoken". "The only constants are death and taxes." :)

You're right in the assertion that k8s needs a place to run, but that doesn't 
necessarily mean OpenStack, unless we help integrate it and make it a great 
place to run.

I'm glad the IaaS stuff in OpenStack suits your needs. That's great. It hasn't 
served a great number of users' needs though, and has been at least misleading 
folks into believing it will for a number of years. If we want it to just be an 
IaaS, let's free up those users to move on elsewhere.

Does OpenStack intend just to be an IaaS in a few years or something else?

Thanks,
Kevin

From: Monty Taylor [mord...@inaugust.com]
Sent: Friday, March 10, 2017 9:27 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog

On 03/10/2017 10:59 AM, Clint Byrum wrote:
> Excerpts from Joshua Harlow's message of 2017-03-09 21:53:58 -0800:
>> Renat Akhmerov wrote:
>>>
 On 10 Mar 2017, at 06:02, Zane Bitter wrote:

 On 08/03/17 11:23, David Moreau Simard wrote:
> The App Catalog, to me, sounds sort of like a weird message that
> OpenStack somehow requires applications to be
> packaged/installed/deployed differently.
> If anything, perhaps we should spend more effort on advertising that
> OpenStack provides bare metal or virtual compute resources and that
> apps will work just like any other places.

 Look, it's true that legacy apps from the 90s will run on any VM you
 can give them. But the rest of the world has spent the last 15 years
 moving on from that. Applications of the future, and increasingly the
 present, span multiple VMs/containers, make use of services provided
 by the cloud, and interact with their own infrastructure. And users
 absolutely will need ways of packaging and deploying them that work
 with the underlying infrastructure. Even those apps from the 90s
 should be taking advantage of things like e.g. Neutron security
 groups, configuration of which is and will always be out of scope for
 Docker Hub images.

 So no, we should NOT spend more effort on advertising that we aim to
 become to cloud what Subversion is to version control. We've done far
 too much of that already IMHO.
>>>
>>> 100% agree with that.
>>>
>>> And this whole discussion is taking me to the question: is there really
>>> any officially accepted strategy for OpenStack for 1, 3, 5 years?
>>
>> I can propose what I would like for a strategy (it's not more VMs and
>> more neutron security groups...), though if it involves (more) design by
>> committee, count me out.
>>
>> I honestly believe we have to do the equivalent of a technology leapfrog
>> if we actually want to be relevant; but maybe I'm too eager...
>>
>
> Open source isn't really famous for technology leapfrogging.
>
> And before you say "but Docker.." remember that Solaris had zones 13
> years ago.
>
> What a community like ours is good at doing is gathering all the
> exciting industry leading bleeding edge chaos into a boring commodity
> platform. What Zane is saying (and I agree with) is let's make sure we see
> the whole cloud forest and not just focus on the VM trees in front of us.
>
> I'm curious what you (Josh) or Zane would change too.
> Containers/apps/kubes/etc. have to run on computers with storage and
> networks. OpenStack provides a pretty rich set of features for giving
> users computers with storage on networks, and operators a way to manage
> those. So I fail to see how that is svn to "cloud native"'s git. It
> seems more foundational and complementary.

I agree with this strongly.

I get frustrated really quickly when people tell me that, as a user, I
_should_ want something different than what I actually do want.

It turns out, I _do_ want computers - or at least things that behave
like computers - for some percentage of my workloads. I'm not the only
one out there who wants that.

There are other humans out there who do not want computers. They don't
like computers at all. They want generic execution contexts into which
they can put their apps.

It's just as silly to tell those people that they _should_ want
computers as it is to tell me that I _should_ want a generic execution
context.

One of the wonderful things about the situation we're in now is that if
you stack a k8s on top of an OpenStack then you empower the USER to
decide 

Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-10 Thread John Griffith
On Fri, Mar 10, 2017 at 1:34 PM, Ken'ichi Ohmichi 
wrote:

> Hi John,
>
> Now Tempest tests microversions only for Nova, and it contains a
> testing framework that can be reused by other projects.
> With this framework we can implement the necessary microversion tests as
> we want; in fact, many Nova microversions are not yet tested by
> Tempest.
> We can see the tested microversions of Nova at
> https://github.com/openstack/tempest/blob/master/doc/source/microversion_testing.rst#microversion-tests-implemented-in-tempest
>
> Before implementing microversion testing for Cinder, we will implement
> JSON-Schema validation for API responses for Cinder.
> The validation will be helpful for testing base microversion of Cinder
> API and we will be able to implement the microversion tests based on
> that.
> This implementation is marked as 7th priority in this Pike cycle as
> https://etherpad.openstack.org/p/pike-qa-priorities
>
> In addition, the Cinder v3 API is not currently tested, so we are going to
> enable v3 tests with some restructuring of Tempest in this cycle.
> The detail is described on the part of "Volume API" of
> https://etherpad.openstack.org/p/tempest-api-versions-in-pike
>
> Thanks
> Ken Ohmichi
>
> ---
>
> 2017-03-10 11:37 GMT-08:00 John Griffith :
> > Hey Everyone,
> >
> > So along the lines of an earlier thread that went out regarding testing
> of
> > deprecated API's and Tempest etc [1].
> >
> > Now that micro-versions are *the API versioning scheme to rule them all*
> one
> > question I've not been able to find an answer for is what we're going to
> > promise here for support and testing.  My understanding thus far is that
> the
> > "community" approach here is "nothing is ever deprecated, and everything
> is
> > supported forever".
> >
> > That's sort of a tall order IMO, but ok.  I've already had some questions
> > from folks about implementing an explicit Tempest test for every
> > micro-versioned implementation of an API call also.  My response has been
> > "nahh, just always test latest available".  This kinda means that we're
> not
> > testing/supporting the previous versions as promised though.
> >
> > Anyway; I'm certain that between Nova and the API-WG this has come up
> and is
> > probably addressed, just wondering if somebody can point me to some
> > documentation or policies in this respect.
> >
> > Thanks,
> > John
> >

Thanks for the pointer to the doc Ken, that helps a lot.  I do have
concerns about supportability, test sprawl and life-cycle with this, but
maybe it's unwarranted.
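As a rough illustration of the trade-off John raises, here is a sketch of how microversion range gating typically works in a test framework: a test declares a [min, max] microversion range and is skipped unless the configured microversion falls inside it. The helper names are made up for illustration; this is not Tempest's actual framework.

```python
# Hypothetical microversion gating sketch (illustrative names, not Tempest's
# real API): compare dotted microversion strings as (major, minor) tuples.

def parse(mv):
    """'3.27' -> (3, 27); 'latest' sorts above every concrete version."""
    if mv == "latest":
        return (float("inf"), float("inf"))
    major, minor = mv.split(".")
    return (int(major), int(minor))


def in_range(configured, minimum, maximum):
    """Would a test with [minimum, maximum] run at the configured version?"""
    return parse(minimum) <= parse(configured) <= parse(maximum)


# "Always test latest available" means configuring 'latest': every range
# whose max is 'latest' runs, but tests pinned to older microversion
# behavior get skipped -- which is exactly the coverage gap John describes.
assert in_range("3.27", "3.0", "latest")
assert in_range("latest", "3.0", "latest")
assert not in_range("latest", "3.0", "3.10")  # old-behavior test is skipped
print("microversion gating sketch ok")
```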


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Fox, Kevin M
But there is plenty of software out there that open source has not made into a 
boring commodity platform. AWS has tons of quite useful services that have no 
real open source implementation today. I have users being pushed out of 
OpenStack onto AWS simply because it is so much cheaper to build applications 
on top of AWS, as the common services there mean they don't need to implement 
the bits themselves.

This is where OpenStack could still get a competitive advantage. But on the 
current trajectory, the plumbing those types of things need to use in OpenStack 
today are not agile enough.

Take Trove as an example. The goal is to provide database as a service. The 
user doesn't care if it is launched on a vm, or if it happens through 
containers. But the latter is much faster to launch now, easier to code for, 
and easier for operators to administer. But we're stuck trying to add more 
and more layers of abstraction for compatibility with all the ways of 
deploying it on VMs and containers, rather than picking a winner and just 
going with it.

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Friday, March 10, 2017 8:59 AM
To: openstack-dev
Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog

Excerpts from Joshua Harlow's message of 2017-03-09 21:53:58 -0800:
> Renat Akhmerov wrote:
> >
> >> On 10 Mar 2017, at 06:02, Zane Bitter wrote:
> >>
> >> On 08/03/17 11:23, David Moreau Simard wrote:
> >>> The App Catalog, to me, sounds sort of like a weird message that
> >>> OpenStack somehow requires applications to be
> >>> packaged/installed/deployed differently.
> >>> If anything, perhaps we should spend more effort on advertising that
> >>> OpenStack provides bare metal or virtual compute resources and that
> >>> apps will work just like any other places.
> >>
> >> Look, it's true that legacy apps from the 90s will run on any VM you
> >> can give them. But the rest of the world has spent the last 15 years
> >> moving on from that. Applications of the future, and increasingly the
> >> present, span multiple VMs/containers, make use of services provided
> >> by the cloud, and interact with their own infrastructure. And users
> >> absolutely will need ways of packaging and deploying them that work
> >> with the underlying infrastructure. Even those apps from the 90s
> >> should be taking advantage of things like e.g. Neutron security
> >> groups, configuration of which is and will always be out of scope for
> >> Docker Hub images.
> >>
> >> So no, we should NOT spend more effort on advertising that we aim to
> >> become to cloud what Subversion is to version control. We've done far
> >> too much of that already IMHO.
> >
> > 100% agree with that.
> >
> > And this whole discussion is taking me to the question: is there really
> > any officially accepted strategy for OpenStack for 1, 3, 5 years?
>
> I can propose what I would like for a strategy (it's not more VMs and
> more neutron security groups...), though if it involves (more) design by
> committee, count me out.
>
> I honestly believe we have to do the equivalent of a technology leapfrog
> if we actually want to be relevant; but maybe I'm too eager...
>

Open source isn't really famous for technology leapfrogging.

And before you say "but Docker.." remember that Solaris had zones 13
years ago.

What a community like ours is good at doing is gathering all the
exciting industry leading bleeding edge chaos into a boring commodity
platform. What Zane is saying (and I agree with) is let's make sure we see
the whole cloud forest and not just focus on the VM trees in front of us.

I'm curious what you (Josh) or Zane would change too.
Containers/apps/kubes/etc. have to run on computers with storage and
networks. OpenStack provides a pretty rich set of features for giving
users computers with storage on networks, and operators a way to manage
those. So I fail to see how that is svn to "cloud native"'s git. It
> seems more foundational and complementary.



Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-10 Thread Ken'ichi Ohmichi
Hi John,

Now Tempest tests microversions only for Nova, and it contains a
testing framework that can be reused by other projects.
With this framework we can implement the necessary microversion tests as
we want; in fact, many Nova microversions are not yet tested by
Tempest.
We can see the tested microversion of Nova on
https://github.com/openstack/tempest/blob/master/doc/source/microversion_testing.rst#microversion-tests-implemented-in-tempest

Before implementing microversion testing for Cinder, we will implement
JSON-Schema validation for API responses for Cinder.
The validation will be helpful for testing base microversion of Cinder
API and we will be able to implement the microversion tests based on
that.
This implementation is marked as 7th priority in this Pike cycle as
https://etherpad.openstack.org/p/pike-qa-priorities

In addition, the Cinder v3 API is not currently tested, so we are going to
enable v3 tests with some restructuring of Tempest in this cycle.
The detail is described on the part of "Volume API" of
https://etherpad.openstack.org/p/tempest-api-versions-in-pike

Thanks
Ken Ohmichi

---

2017-03-10 11:37 GMT-08:00 John Griffith :
> Hey Everyone,
>
> So along the lines of an earlier thread that went out regarding testing of
> deprecated API's and Tempest etc [1].
>
> Now that micro-versions are *the API versioning scheme to rule them all* one
> question I've not been able to find an answer for is what we're going to
> promise here for support and testing.  My understanding thus far is that the
> "community" approach here is "nothing is ever deprecated, and everything is
> supported forever".
>
> That's sort of a tall order IMO, but ok.  I've already had some questions
> from folks about implementing an explicit Tempest test for every
> micro-versioned implementation of an API call also.  My response has been
> "nahh, just always test latest available".  This kinda means that we're not
> testing/supporting the previous versions as promised though.
>
> Anyway; I'm certain that between Nova and the API-WG this has come up and is
> probably addressed, just wondering if somebody can point me to some
> documentation or policies in this respect.
>
> Thanks,
> John
>


Re: [openstack-dev] [neutron][astara-neutron]

2017-03-10 Thread Mark McClain
Development of the project has been quiet for some time as the developers have 
moved on to other work.  The main project repo is astara [1], which had a 
few commits last fall. At the summit in Barcelona, we solicited interest in 
continuing Astara. For those interested in continuing it, I’m happy to help 
transition resources.

mark

[1] http://git.openstack.org/cgit/openstack/astara/log/

> On Mar 10, 2017, at 10:32 AM, Dariusz Śmigiel  
> wrote:
> 
> Hey,
> astara-neutron repo seems to be broken for a long time [1].
> Last merged patches are dated on July 2016 [2]. After that, there is a
> queue with jenkins -1s. It doesn't look like to be maintained anymore.
> 
> Is there an interest in the Community to fix that?
> 
> BR,
> dasm
> 
> [1] https://review.openstack.org/#/q/project:openstack/astara-neutron
> [2] https://review.openstack.org/#/c/340033/
> 


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Zane Bitter

On 10/03/17 11:59, Clint Byrum wrote:
> What a community like ours is good at doing is gathering all the
> exciting industry leading bleeding edge chaos into a boring commodity
> platform. What Zane is saying (and I agree with) is let's make sure we see
> the whole cloud forest and not just focus on the VM trees in front of us.
>
> I'm curious what you (Josh) or Zane would change too.


Here's something simple that I've been pushing for 2 years and we still 
haven't done yet:


http://www.zerobanana.com/archive/2015/04/24#a-vision-for-openstack

There's plenty more to say, but perhaps this is not the place. If Josh 
starts a new thread I'll be happy to contribute.



> Containers/apps/kubes/etc. have to run on computers with storage and
> networks. OpenStack provides a pretty rich set of features for giving
> users computers with storage on networks, and operators a way to manage
> those. So I fail to see how that is svn to "cloud native"'s git. It
> seems more foundational and complementary.


I should have clarified the svn remark. It was a discussion about 
marketing, not technology.


svn is solid, stable, ubiquitous in a certain class of old-school 
Enterprise shops, and completely irrelevant to the future of software 
development. That's the future we're in danger of sleepwalking into.


cheers,
Zane.



Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Dan Sneddon
On 03/10/2017 08:26 AM, Heidi Joy Tretheway wrote:
> Hi TripleO team, 
> 
> Here’s an update on your project logo. Our illustrator tried to be as
> true as possible to your original, while ensuring it matched the line
> weight, color palette and style of the rest. We also worked to make sure
> that three Os in the logo are preserved. Thanks for your patience as we
> worked on this! Feel free to direct feedback to me.
> 
> 
> 

This is a huge improvement! Some of the previous drafts looked more like
a generic bird and less like an owl.

I have a suggestion on how this might be more owl-like. If you look at
real owl faces [1], you will see that their eyes are typically yellow,
and they often have a white circle around the eyes (black pupil, yellow
eye, black/white circle of feathers). I think that we could add a yellow
ring around the black pupil, and possibly accentuate the ears (since
owls often have white tufts on their ears).

I whipped up a quick example of what I'm talking about, it's attached
(hopefully it will survive the mailing list).

[1] - https://www.google.com/search?q=owl+face=isch=u=univ

-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter


Re: [openstack-dev] [all][api] POST /api-wg/news

2017-03-10 Thread Ed Leafe
On Mar 9, 2017, at 9:30 PM, Everett Toews  wrote:

>> For those who don't know, Everett and I started the API-WG a few years ago,
> 
> Umm...given this statement, I feel I _must_ provide some clarity.
> 
> The API WG was started by myself, Jay Pipes, and Chris Yeoh. The ball got 
> rolling in this email thread [2]. One of our first artifacts was this wiki 
> page [3]. The 3 of us were the original core members (I'm not sure if [4] is 
> capable of displaying history). Then we started making commits to the repo 
> [5]. I'll leave it as an exercise to the reader to determine who got 
> involved, when they got involved, and in what capacity.

Sorry if my memory is shaky - mea culpa. I thought Everett and I had had 
discussions about starting something like this in early 2014 back when we were 
working at Rackspace and both fighting the inconsistency of the OpenStack APIs. 
But true, nothing concrete was done at that time.

What I *do* remember quite clearly when I returned to the world of OpenStack in 
late 2014 was that Everett was very active in making the group a relevant force 
for improvement, and has been ever since then. That is the main point I want to 
emphasize.

-- Ed Leafe









Re: [openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-10 Thread John Griffith
On Fri, Mar 10, 2017 at 12:37 PM, John Griffith 
wrote:

> Hey Everyone,
>
> So along the lines of an earlier thread that went out regarding testing of
> deprecated API's and Tempest etc [1].
>
> Now that micro-versions are *the API versioning scheme to rule them all*
> one question I've not been able to find an answer for is what we're going
> to promise here for support and testing.  My understanding thus far is that
> the "community" approach here is "nothing is ever deprecated, and
> everything is supported forever".
>
> That's sort of a tall order IMO, but ok.  I've already had some questions
> from folks about implementing an explicit Tempest test for every
> micro-versioned implementation of an API call also.  My response has been
> "nahh, just always test latest available".  This kinda means that we're not
> testing/supporting the previous versions as promised though.
>
> Anyway; I'm certain that between Nova and the API-WG this has come up and
> is probably addressed, just wondering if somebody can point me to some
> documentation or policies in this respect.
>
> Thanks,
> John
>
Ooops:
  [1]: http://lists.openstack.org/pipermail/openstack-dev/2017-March/113727.html


[openstack-dev] [api][qa][tc][nova][cinder] Testing of a microversioned world

2017-03-10 Thread John Griffith
Hey Everyone,

So along the lines of an earlier thread that went out regarding testing of
deprecated API's and Tempest etc [1].

Now that micro-versions are *the API versioning scheme to rule them all*
one question I've not been able to find an answer for is what we're going
to promise here for support and testing.  My understanding thus far is that
the "community" approach here is "nothing is ever deprecated, and
everything is supported forever".

That's sort of a tall order IMO, but ok.  I've already had some questions
from folks about implementing an explicit Tempest test for every
micro-versioned implementation of an API call also.  My response has been
"nahh, just always test latest available".  This kinda means that we're not
testing/supporting the previous versions as promised though.

Anyway; I'm certain that between Nova and the API-WG this has come up and
is probably addressed, just wondering if somebody can point me to some
documentation or policies in this respect.

Thanks,
John


Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread John Griffith
On Fri, Mar 10, 2017 at 9:51 AM, Sean McGinnis 
wrote:

> > >
> > As far as I can tell:
> > - Cinder v1 if I'm not mistaken has been deprecated in Juno, so it's
> > deprecated in all supported releases.
> > - Glance v1 has been deprecated in Newton, so it's deprecated in all
> > supported releases
> > - Keystone v2 has been deprecated in Mitaka, so testing *must* stay in
> > Tempest until Mitaka EOL, which is in a month from now
> >
> > We should stop testing these three api versions in the common gate
> > including stable branches now (except for keystone v2 on stable/mitaka
> > which can run for one more month).
> >
> > Are cinder / glance / keystone willing to take over the API tests and run
> > them in their own gate until removal of the API version?
> >
> > Doug
>
> With Cinder's v1 API being deprecated for quite awhile now, I would
> actually prefer to just remove all tempest tests and drop the API
> completely. There was some hesitation about removal a few cycles back,
> since there was concern (rightly so) that a lot of deployments and a
> lot of users were still using it.
>

​+1​

>
> I think it has now been marked as deprecated long enough that if anyone
> is still using it, it's just out of obstinance. We've removed the v1
> api-ref documentation, and the default in the client has been v2 for
> awhile.
>
> Unless there's a strong objection, and a valid argument to support it,
> I really would just like to drop v1 from Cinder and not waste any more
> cycles on redoing tempest tests and reconfiguring jobs to support
> something we have stated for over two years that we were no longer going
> to support. Juno went EOL in December of 2015. I really hope it's safe
> now to remove.
>
> Sean
>


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Jay Pipes

On 03/10/2017 01:37 PM, Zane Bitter wrote:

On 09/03/17 23:57, Renat Akhmerov wrote:

And this whole discussion is taking me to the question: is there really
any officially accepted strategy for OpenStack for 1, 3, 5 years? Is
there any ultimate community goal we’re moving to regardless of
underlying technologies (containers, virtualization etc.)? I know we’re
now considering various community goals like transition to Python 3.5
etc. but these goals don’t tell anything about our future as an IT
ecosystem from user perspective. I may assume that I’m just not aware of
it. I’d be glad if it was true. I’m eager to know the answers for these
questions.


Me too! There was one effort started by John Garbutt in December, a
technical vision for OpenStack in the form of a TC resolution in the
governance repo:

https://review.openstack.org/#/c/401226/

I wasn't a fan of the draft content, but I believe his intention was to
seed it with a formalised version of our de-facto historical position
(tl;dr legacy apps from the 90s). As long as that's a starting point for
the discussion and not a conclusion then I think this was a valuable
contribution. I commented with a non-exhaustive list of things that I
would expect to see at least debated in a vision for a cloud computing
platform, which I will reproduce here since it's relevant to this thread:

* Infinite scaling - the ability in principle to scale from zero to an
arbitrarily large number of users without rewriting your application
(e.g. if your application can store one file in Swift then there's no
theoretical limit to how many it can store. c.f. Cinder where at some
point you'd have to start juggling multiple volumes.)
* Granularity of allocation - pay only for the resources you actually
use, rather than allocating a chunk that you may or may not be using
(so Nova->containers->FaaS, Cinder->Swift, Trove->??? [RIP MagnetoDB])
* Full control of infrastructure - notwithstanding the above, maintain
Nova/Cinder/Neutron/Trove/ so that legacy applications, highly
specialised applications, and higher-level services like PaaS can make
fully customised use of the virtual infrastructure.
* Hardware virtualisation - make anything that might typically be done
in hardware available in a multi-tenant software-defined environment:
servers, routers, load balancers, firewalls, video codecs, GPGPUs, FPGAs...
* Built-in reliability - don't require even the smallest apps to have 3
VMs + a cluster manager to enforce any reliability guarantees; provide
those guarantees using multi-tenant services that efficiently share
resources between applications (see also: Infinite scaling, Granularity
of allocation).
* Application control - (securely) give applications control over their
own infrastructure, so that no part of the application needs to reside
outside of the cloud.
* Integration - cloud services that effectively form part of the user's
application can communicate amongst themselves, where appropriate,
without the need for client-side glue (see also: Built-in reliability).
* Interoperability - the same applications can be deployed on a variety
of private and public OpenStack clouds.
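The "granularity of allocation" bullet is the easiest one to quantify. A toy cost model (all prices and helper names invented purely for illustration) shows why chunked allocation penalises small and in-between workloads:

```python
# Toy model contrasting pre-allocated chunks (e.g. a Cinder volume)
# with pay-per-use granular allocation (e.g. Swift objects).
# All prices are invented for illustration only.

CHUNK_SIZE_GB = 100   # volumes are provisioned in fixed chunks
PRICE_PER_GB = 0.05   # same nominal per-GB price in both models


def chunked_cost(used_gb: float) -> float:
    """Pay for whole chunks, whether or not they are full."""
    chunks = -(-used_gb // CHUNK_SIZE_GB)  # ceiling division
    return chunks * CHUNK_SIZE_GB * PRICE_PER_GB


def granular_cost(used_gb: float) -> float:
    """Pay only for what is actually stored."""
    return used_gb * PRICE_PER_GB


for used in (1, 99, 101, 250):
    print(used, chunked_cost(used), granular_cost(used))
```

At 99 GB the two models charge roughly the same; at 1 GB or 101 GB the chunked model charges for capacity that sits idle, which is exactly the Cinder-vs-Swift distinction drawn above.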


Those are all interesting technical concepts to think about and discuss. 
However, what Kevin said originally in his response about the OpenStack 
community needing to decide what exactly it *is* and what scope 
OpenStack should pursue, is the real foundational question that needs to 
be addressed. Until it is, none of the rest of these topics have much 
contextual relevance and are just a distraction, IMHO.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2017-03-10 10:09:24 -0800:
> Clint Byrum wrote:
> > Excerpts from Joshua Harlow's message of 2017-03-09 21:53:58 -0800:
> >> Renat Akhmerov wrote:
>  On 10 Mar 2017, at 06:02, Zane Bitter wrote:
> 
>  On 08/03/17 11:23, David Moreau Simard wrote:
> > The App Catalog, to me, sounds sort of like a weird message that
> > OpenStack somehow requires applications to be
> > packaged/installed/deployed differently.
> > If anything, perhaps we should spend more effort on advertising that
> > OpenStack provides bare metal or virtual compute resources and that
> > apps will work just like any other places.
>  Look, it's true that legacy apps from the 90s will run on any VM you
>  can give them. But the rest of the world has spent the last 15 years
>  moving on from that. Applications of the future, and increasingly the
>  present, span multiple VMs/containers, make use of services provided
>  by the cloud, and interact with their own infrastructure. And users
>  absolutely will need ways of packaging and deploying them that work
>  with the underlying infrastructure. Even those apps from the 90s
>  should be taking advantage of things like e.g. Neutron security
>  groups, configuration of which is and will always be out of scope for
>  Docker Hub images.
> 
>  So no, we should NOT spend more effort on advertising that we aim to
>  become to cloud what Subversion is to version control. We've done far
>  too much of that already IMHO.
> >>> 100% agree with that.
> >>>
> >>> And this whole discussion is taking me to the question: is there really
> >>> any officially accepted strategy for OpenStack for 1, 3, 5 years?
> >> I can propose what I would like for a strategy (it's not more VMs and
> >> more neutron security groups...), though if it involves (more) design by
> >> committee, count me out.
> >>
> >> I honestly believe we have to do the equivalent of a technology leapfrog
> >> if we actually want to be relevant; but maybe I'm too eager...
> >>
> >
> > Open source isn't really famous for technology leapfrogging.
> 
> Time to get famous.
> 
> I hate accepting what the status quo is just because it's not been 
> famous (or easy, or turned out, or ...) before.
> 

Good luck. I can't see how you get an investor to enable you to do that
in this context without an absolute _mountain_ of relatively predictable
service-industry profit involved.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Brent Eagles
On Fri, Mar 10, 2017 at 12:56 PM, Heidi Joy Tretheway <
heidi...@openstack.org> wrote:

> Hi TripleO team,
>
> Here’s an update on your project logo. Our illustrator tried to be as true
> as possible to your original, while ensuring it matched the line weight,
> color palette and style of the rest. We also worked to make sure that the
> three Os in the logo are preserved. Thanks for your patience as we worked on
> this! Feel free to direct feedback to me.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
Nice! It feels like a refinement, not a departure.

Cheers,

Brent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Emilien Macchi
On Fri, Mar 10, 2017 at 12:43 PM, Jiří Stránský  wrote:
> On 10.3.2017 17:26, Heidi Joy Tretheway wrote:
>>
>> Hi TripleO team,
>>
>> Here’s an update on your project logo. Our illustrator tried to be as true
>> as
>> possible to your original, while ensuring it matched the line weight,
>> color
>> palette and style of the rest. We also worked to make sure that the three
>> Os in the logo are preserved. Thanks for your patience as we worked on
>> this! Feel free to direct feedback to me.

It looks excellent to me. Thanks Heidi!

>
> I think it's great! The connection to our current logo is pretty clear to
> me, so hopefully we wouldn't be confusing anyone too much by switching to
> the new logo. Thanks for the effort!
>
> Also, personally I like the color scheme change to a more playful/cartoony
> look as you mentioned in your other e-mail.
>
> Jirka
>
>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-03-10 Thread Jay Faulkner

> On Mar 10, 2017, at 8:28 AM, Heidi Joy Tretheway  
> wrote:
> 
> Hi Ironic team, 
> Here’s an update on your project logo. Our illustrator tried to be as true as 
> possible to your original, while ensuring it matched the line weight, color 
> palette and style of the rest. Thanks for your patience as we worked on this! 
> Feel free to direct feedback to me; we really want to get this right for you. 
> 
> 
> 


+1, this is a great evolution of the existing Pixie Boots. 

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Zane Bitter

On 09/03/17 23:57, Renat Akhmerov wrote:

And this whole discussion is taking me to the question: is there really
any officially accepted strategy for OpenStack for 1, 3, 5 years? Is
there any ultimate community goal we’re moving to regardless of
underlying technologies (containers, virtualization etc.)? I know we’re
now considering various community goals like transition to Python 3.5
etc. but these goals don’t tell anything about our future as an IT
ecosystem from a user perspective. It may be that I’m simply not aware of
it; I’d be glad if that were the case. I’m eager to know the answers to these
questions.


Me too! There was one effort started by John Garbutt in December, a 
technical vision for OpenStack in the form of a TC resolution in the 
governance repo:


https://review.openstack.org/#/c/401226/

I wasn't a fan of the draft content, but I believe his intention was to 
seed it with a formalised version of our de-facto historical position 
(tl;dr legacy apps from the 90s). As long as that's a starting point for 
the discussion and not a conclusion then I think this was a valuable 
contribution. I commented with a non-exhaustive list of things that I 
would expect to see at least debated in a vision for a cloud computing 
platform, which I will reproduce here since it's relevant to this thread:


* Infinite scaling - the ability in principle to scale from zero to an 
arbitrarily large number of users without rewriting your application 
(e.g. if your application can store one file in Swift then there's no 
theoretical limit to how many it can store. c.f. Cinder where at some 
point you'd have to start juggling multiple volumes.)
* Granularity of allocation - pay only for the resources you actually 
use, rather than to allocate a chunk that you may or may not be using 
(so Nova->containers->FaaS, Cinder->Swift, Trove->??? [RIP MagnetoDB], )
* Full control of infrastructure - notwithstanding the above, maintain 
Nova/Cinder/Neutron/Trove/ so that legacy applications, highly 
specialised applications, and higher-level services like PaaS can make 
fully customised use of the virtual infrastructure.
* Hardware virtualisation - make anything that might typically be done 
in hardware available in a multi-tenant software-defined environment: 
servers, routers, load balancers, firewalls, video codecs, GPGPUs, FPGAs...
* Built-in reliability - don't require even the smallest apps to have 3 
VMs + a cluster manager to enforce any reliability guarantees; provide 
those guarantees using multi-tenant services that efficiently share 
resources between applications (see also: Infinite scaling, Granularity 
of allocation).
* Application control - (securely) give applications control over their 
own infrastructure, so that no part of the application needs to reside 
outside of the cloud.
* Integration - cloud services that effectively form part of the user's 
application can communicate amongst themselves, where appropriate, 
without the need for client-side glue (see also: Built-in reliability).
* Interoperability - the same applications can be deployed on a variety 
of private and public OpenStack clouds.


Anyway, the review was abandoned and moved to an etherpad (sans 
comments... sigh): 
https://etherpad.openstack.org/p/AtlantaPTG-SWG-OpenStackVision


Then the whole discussion was shelved, pending a resolution of the 
preceding discussion about a vision for the TC - IOW the TC had to 
decide whether anybody ought to care about the Technical Committee's 
technical vision for OpenStack (my 2c: it's right there in the name) 
before it could decide what that vision was. Of course you can't not 
decide - not deciding is a quieter way of deciding to stick with the 
status quo. Though given the amount of pushback they got against the 
relatively simple idea of moving off the imminently-EOL Python 2 runtime 
in a co-ordinated rather than ad-hoc fashion, I guess you couldn't 
really blame them.


*That* discussion (the TC vision one) was scheduled to happen in the 
Stewardship Working Group meeting at the PTG in Atlanta: 
https://etherpad.openstack.org/p/AtlantaPTG-SWG-TCVision


Then it was cancelled because it became clear that most of the TC 
members didn't plan to show up due to scheduling conflicts.


Looking through the TC mailing list, it appears it was rescheduled to 
happen this week at a joint TC/Board meeting in Boston. This looks to be 
one of the outputs: 
https://etherpad.openstack.org/p/ocata-12questions-exercise-board 
Hopefully we'll hear more in the next few days, since this _just_ 
happened. But progress toward a technical vision can't come fast enough 
for me - we needed this 3 years ago, and we don't *have* another year to 
figure it out.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [tc][appcat][murano][app-catalog] The future of the App Catalog

2017-03-10 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Thierry Carrez's message of 2017-03-10 16:48:02 +0100:

Christopher Aedo wrote:

On Thu, Mar 9, 2017 at 4:08 AM, Thierry Carrez  wrote:

Christopher Aedo wrote:

On Mon, Mar 6, 2017 at 3:26 AM, Thierry Carrez  wrote:

[...]
In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) makes us look like we
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.

Without something like Murano "thinly wrapping" docker apps, how would
you propose current users of OpenStack clouds deploy docker apps?  Or
any other app for that matter?  It seems a little unfair to talk about
murano apps this way when no reasonable alternative exists for easily
deploying docker apps.  When I look back at the recent history of how
we've handled containers (nova-docker, magnum, kubernetes, etc) it
does not seem like we're making it easy for the folks who want to
deploy a container on their cloud...

I'd say there are two approaches: you can use the container-native
approach ("docker run" after provisioning some container-enabled host
using Nova or K8s cluster using Magnum), or you can use the
OpenStack-native approach (zun create nginx) and have it
auto-provisioned for you. Those projects have a narrower scope, and
fully co-opt the container ecosystem without making us appear as trying
to build our own competitive application packaging/delivery/marketplace
mechanism.

I just think that adding the Murano abstraction in the middle of it and
using an AppCatalog-provided Murano-powered generic Docker container
wrapper is introducing unnecessary options and complexity -- options
that are strategically hurting us when we talk to those adjacent
communities...

OK thank you for making it clearer, now I understand where you're
coming from.  I do agree with this sentiment.  I don't have any
experience with zun but it sounds like it's the least-cost way to
deploy a docker app in the environments where it's installed.

I think overall the app catalog was an interesting experiment, but I
don't think it makes sense to continue as-is.  Unless someone comes up
with a compelling new direction, I don't see much point in keeping it
running.  Especially since it sounds like Mirantis is on board (and
the connection to a murano ecosystem was the only thing I saw that
might be interesting).

Right -- it's also worth noting that I'm only talking about the App
Catalog here, not about Murano. Zun might be a bit too young for us to
place all our eggs in the same basket, and some others pointed to how
Murano is still a viable alternative package for things that are more
complex than a set of containers. What I'm questioning at the moment is
the continued existence of a marketplace that did not catch fire as much
as we hoped -- an app marketplace with not enough apps just hurts more
than it helps imho.

In particular, I'm fine if (for example) the Docker-wrapper murano
package ends up being shipped as a standard/example package /in/ Murano,
and continues to exist there as a "reasonable alternative for easily
deploying docker apps" :)



While we were debating how to do everything inside our walls, Google
and Microsoft became viable public cloud vendors alongside the other
players. We now live in a true multi-cloud world (not just a theoretical
one).


Yes, please. If only we could stop thinking as a community that everyone
and everything is inside the OpenStack wall, and that every company that
deploys or uses OpenStack only uses things inside that wall (because they
don't). Companies don't care anymore, IMHO (if they ever did), whether a
project is inside the OpenStack wall or not; they care about it being
useful, working, and maintainable/sustainable.




And what I see when I look outside our walls is not people trying to make
the initial steps identical or easy. For that there's PaaS. Instead, for
those that want the full control of their computers that IaaS brings,
there's a focus on making it simple, and growing a process that works
the same for the parts that are the same, and differently for the parts
that are different.

I see things like Terraform embracing the differences in clouds, not
hiding behind lowest common denominators. So if you want a Kubernetes on
GCE and one on OpenStack, you'd write two different Terraform plans
that give you the common set of servers you expect, get you config
management and kubernetes setup and hooked into the infrastructure
however it needs to be, and then get out of your way.
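That point about embracing per-cloud differences rather than hiding them can be sketched in Python (the cloud-specific field names below are invented placeholders for illustration, not real Terraform or provider attributes):

```python
# Hypothetical sketch: one shared server spec, two cloud-specific
# "plans" that embrace each cloud's differences instead of hiding
# them behind a lowest common denominator. Not real Terraform.

SHARED_SPEC = {"count": 3, "image": "ubuntu-16.04", "role": "k8s-node"}


def plan_openstack(spec):
    """OpenStack plan: attach Neutron networks and security groups."""
    return [
        {"name": f"k8s-{i}", "image": spec["image"],
         "nic": {"network": "private-net"},
         "security_groups": ["k8s-nodes"]}
        for i in range(spec["count"])
    ]


def plan_gce(spec):
    """GCE plan: the same servers, with GCE-style network plumbing."""
    return [
        {"name": f"k8s-{i}", "image": spec["image"],
         "network_interface": {"subnetwork": "k8s-subnet"},
         "tags": ["k8s-nodes"]}
        for i in range(spec["count"])
    ]


# Both plans converge on the same expected set of servers, while the
# infrastructure hookup differs per cloud.
assert len(plan_openstack(SHARED_SPEC)) == len(plan_gce(SHARED_SPEC))
```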

So, while I think it's cool to make sure we are supporting our users
when all they want is us, it might make more sense to do that outside
our walls, where we can meet the rest of the cloud world too.


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2017-03-09 21:53:58 -0800:

Renat Akhmerov wrote:

On 10 Mar 2017, at 06:02, Zane Bitter wrote:

On 08/03/17 11:23, David Moreau Simard wrote:

The App Catalog, to me, sounds sort of like a weird message that
OpenStack somehow requires applications to be
packaged/installed/deployed differently.
If anything, perhaps we should spend more effort on advertising that
OpenStack provides bare metal or virtual compute resources and that
apps will work just like any other places.

Look, it's true that legacy apps from the 90s will run on any VM you
can give them. But the rest of the world has spent the last 15 years
moving on from that. Applications of the future, and increasingly the
present, span multiple VMs/containers, make use of services provided
by the cloud, and interact with their own infrastructure. And users
absolutely will need ways of packaging and deploying them that work
with the underlying infrastructure. Even those apps from the 90s
should be taking advantage of things like e.g. Neutron security
groups, configuration of which is and will always be out of scope for
Docker Hub images.

So no, we should NOT spend more effort on advertising that we aim to
become to cloud what Subversion is to version control. We've done far
too much of that already IMHO.

100% agree with that.

And this whole discussion is taking me to the question: is there really
any officially accepted strategy for OpenStack for 1, 3, 5 years?

I can propose what I would like for a strategy (it's not more VMs and
more neutron security groups...), though if it involves (more) design by
committee, count me out.

I honestly believe we have to do the equivalent of a technology leapfrog
if we actually want to be relevant; but maybe I'm too eager...



Open source isn't really famous for technology leapfrogging.


Time to get famous.

I hate accepting what the status quo is just because it's not been 
famous (or easy, or turned out, or ...) before.




And before you say "but Docker.." remember that Solaris had zones 13
years ago.

What a community like ours is good at doing is gathering all the
exciting industry leading bleeding edge chaos into a boring commodity
platform. What Zane is saying (and I agree with) is let's make sure we see
the whole cloud forest and not just focus on the VM trees in front of us.

I'm curious what you (Josh) or Zane would change too.
Containers/apps/kubes/etc. have to run on computers with storage and
networks. OpenStack provides a pretty rich set of features for giving
users computers with storage on networks, and operators a way to manage
those. So I fail to see how that is svn to "cloud native"'s git. It
seems more foundational and complementary.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Jiří Stránský

On 10.3.2017 17:26, Heidi Joy Tretheway wrote:

Hi TripleO team,

Here’s an update on your project logo. Our illustrator tried to be as true as
possible to your original, while ensuring it matched the line weight, color
palette and style of the rest. We also worked to make sure that the three Os in the
logo are preserved. Thanks for your patience as we worked on this! Feel free to
direct feedback to me.


I think it's great! The connection to our current logo is pretty clear 
to me, so hopefully we wouldn't be confusing anyone too much by 
switching to the new logo. Thanks for the effort!


Also, personally I like the color scheme change to a more
playful/cartoony look as you mentioned in your other e-mail.


Jirka





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat][murano][app-catalog] The future of the App Catalog

2017-03-10 Thread Clint Byrum
Excerpts from Thierry Carrez's message of 2017-03-10 16:48:02 +0100:
> Christopher Aedo wrote:
> > On Thu, Mar 9, 2017 at 4:08 AM, Thierry Carrez  
> > wrote:
> >> Christopher Aedo wrote:
> >>> On Mon, Mar 6, 2017 at 3:26 AM, Thierry Carrez  
> >>> wrote:
>  [...]
>  In parallel, Docker developed a pretty successful containerized
>  application marketplace (the Docker Hub), with hundreds of thousands of
>  regularly-updated apps. Keeping the App Catalog around (including its
>  thinly-wrapped Docker container Murano packages) makes us look like we
>  are unsuccessfully trying to compete with that ecosystem, while
>  OpenStack is in fact completely complementary.
> >>>
> >>> Without something like Murano "thinly wrapping" docker apps, how would
> >>> you propose current users of OpenStack clouds deploy docker apps?  Or
> >>> any other app for that matter?  It seems a little unfair to talk about
> >>> murano apps this way when no reasonable alternative exists for easily
> >>> deploying docker apps.  When I look back at the recent history of how
> >>> we've handled containers (nova-docker, magnum, kubernetes, etc) it
> >>> does not seem like we're making it easy for the folks who want to
> >>> deploy a container on their cloud...
> >>
> >> I'd say there are two approaches: you can use the container-native
> >> approach ("docker run" after provisioning some container-enabled host
> >> using Nova or K8s cluster using Magnum), or you can use the
> >> OpenStack-native approach (zun create nginx) and have it
> >> auto-provisioned for you. Those projects have a narrower scope, and
> >> fully co-opt the container ecosystem without making us appear as trying
> >> to build our own competitive application packaging/delivery/marketplace
> >> mechanism.
> >>
> >> I just think that adding the Murano abstraction in the middle of it and
> >> using an AppCatalog-provided Murano-powered generic Docker container
> >> wrapper is introducing unnecessary options and complexity -- options
> >> that are strategically hurting us when we talk to those adjacent
> >> communities...
> > 
> > OK thank you for making it clearer, now I understand where you're
> > coming from.  I do agree with this sentiment.  I don't have any
> > experience with zun but it sounds like it's the least-cost way to
> > deploy a docker app in the environments where it's installed.
> > 
> > I think overall the app catalog was an interesting experiment, but I
> > don't think it makes sense to continue as-is.  Unless someone comes up
> > with a compelling new direction, I don't see much point in keeping it
> > running.  Especially since it sounds like Mirantis is on board (and
> > the connection to a murano ecosystem was the only thing I saw that
> > might be interesting).
> 
> Right -- it's also worth noting that I'm only talking about the App
> Catalog here, not about Murano. Zun might be a bit too young for us to
> place all our eggs in the same basket, and some others pointed to how
> Murano is still a viable alternative package for things that are more
> complex than a set of containers. What I'm questioning at the moment is
> the continued existence of a marketplace that did not catch fire as much
> as we hoped -- an app marketplace with not enough apps just hurts more
> than it helps imho.
> 
> In particular, I'm fine if (for example) the Docker-wrapper murano
> package ends up being shipped as a standard/example package /in/ Murano,
> and continues to exist there as a "reasonable alternative for easily
> deploying docker apps" :)
> 

While we were debating how to do everything inside our walls, Google
and Microsoft became viable public cloud vendors alongside the other
players. We now live in a true multi-cloud world (not just a theoretical
one).

And what I see when I look outside our walls is not people trying to make
the initial steps identical or easy. For that there's PaaS. Instead, for
those that want the full control of their computers that IaaS brings,
there's a focus on making it simple, and growing a process that works
the same for the parts that are the same, and differently for the parts
that are different.

I see things like Terraform embracing the differences in clouds, not
hiding behind lowest common denominators. So if you want a Kubernetes on
GCE and one on OpenStack, you'd write two different Terraform plans
that give you the common set of servers you expect, get you config
management and kubernetes setup and hooked into the infrastructure
however it needs to be, and then get out of your way.

So, while I think it's cool to make sure we are supporting our users
when all they want is us, it might make more sense to do that outside
our walls, where we can meet the rest of the cloud world too.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Horizon][searchlight] Sharing resource type implementations

2017-03-10 Thread Tripp, Travis S
Hi Richard,

I’m headed out for vacation so won’t be able to look through it until I get 
back.  However, can you also please get an install of searchlight-ui running so 
that you can see if anything breaks?  I know you don’t typically use devstack, 
but the searchlight devstack plugin installs searchlight UI. [0]

The one thing I’m not sure about is how the resource registry handles potential 
double registrations.  So, if the resource is registered in both code bases, I 
don’t know which would get loaded.

https://review.openstack.org/#/c/444095/2/openstack_dashboard/static/app/core/instances/instances.module.js
https://github.com/openstack/searchlight-ui/blob/master/searchlight_ui/static/resources/os-nova-servers/os-nova-servers.module.js#L57

[0] https://github.com/openstack/searchlight/tree/master/devstack
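For illustration only (this is not Horizon's actual registry code), the double-registration question comes down to which policy the registry applies, first-wins or last-wins:

```python
# Hypothetical sketch of the double-registration question above: if
# both Horizon and searchlight-ui register the same resource type
# (e.g. "OS::Nova::Server"), the registry must pick a policy. This is
# NOT Horizon's real implementation; names are illustrative.

class ResourceTypeRegistry:
    def __init__(self, last_wins=False):
        self._types = {}
        self.last_wins = last_wins

    def register(self, name, module):
        # first-wins (the default here): silently keep the original
        # registration and ignore the duplicate.
        if name in self._types and not self.last_wins:
            return self._types[name]
        self._types[name] = module
        return module

    def get(self, name):
        return self._types[name]


reg = ResourceTypeRegistry()
reg.register("OS::Nova::Server", "horizon.app.core.instances")
reg.register("OS::Nova::Server", "searchlight-ui.os-nova-servers")
print(reg.get("OS::Nova::Server"))  # first-wins keeps Horizon's module
```

Either policy is workable; the failure mode Travis worries about is when nobody knows which policy is in effect.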

Thanks,
Travis

On 3/9/17, 10:58 PM, "Richard Jones"  wrote:

Thanks, Steve!

I've put together an initial patch
https://review.openstack.org/#/c/444095/ which pulls in the
os-nova-servers module and a little extra to make it work in Horizon's
codebase. I've tried to make minimal edits to the actual code -
predominantly just editing module names. I've tested it and it mostly
works on Horizon's side \o/


 Richard

On 10 March 2017 at 14:40, McLellan, Steven  wrote:
> My expertise in this area is deeply suspect but as long as we maintain the
> mapping from the resource type names that searchlight uses (os-nova-servers)
> to the modules we'll be OK. If you or Rob put a patch up against horizon I
> (or a willing victim/volunteer) can test a searchlight-ui patch against it.
>
>
>  Original message 
> From: Richard Jones 
> Date: 3/9/17 21:13 (GMT-06:00)
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [Horizon][searchlight] Sharing resource type
> implementations
>
> Hey folks,
>
> Another potential issue is that the searchlight module structure and
> Horizon's module structure are different in a couple of respects. I
> could just retain the module structure from searchlight
> ('resources.os-nova-servers') or, preferably, I could rename those
> modules to match the Horizon structure more closely
> ('horizon.app.resources.os-nova-servers') or more strictly
> ('horizon.app.core.instances').
>
> As far as I can tell none of the module names are referenced directly
> outside of the module (apart from resources.module.js of course) so
> moving the modules shouldn't affect any existing usage in searchlight
> ui.
>
> We could bikeshed this for ages, so if I could just get Rob and Steve
> to wrestle over it or something, that'd be good. Rob's pretty scrappy.
>
>
>   Richard
>
>
> On 10 March 2017 at 09:56, Richard Jones  wrote:
>> OK, I will work on a plan that migrates the code into Horizon, thanks
>> everyone!
>>
>> Travis, can the searchlight details page stuff be done through
>> extending the base resource type in Horizon? If not, is that perhaps a
>> limitation of the extensible service?
>>
>>
>>  Richard
>>
>>
>> On 10 March 2017 at 02:20, McLellan, Steven 
>> wrote:
>>> I concur; option 4 is the only one that makes sense to me and was what was
>>> intended originally. As long as we can do it in one fell swoop in one cycle
>>> (preferably sooner than later) there should be no issues.
>>>
>>>
>>>
>>>
>>> On 3/9/17, 8:35 AM, "Tripp, Travis S"  wrote:
>>>
Let me get Matt B in on this discussion, but basically, option 4 is my
 initial feeling as Rob stated.

One downside we saw with this approach is that we weren’t going to be
 able to take advantage of searchlight capabilities in details pages if
 everything was in native horizon.  Although, I suppose that could be done by
 using the hz-if-services directive [0] if horizon will allow
 searchlight-optimized code to be in the horizon repo.

[0]
 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/app/core/cloud-services/hz-if-services.directive.js

-Travis

On 3/9/17, 5:09 AM, "Rob Cresswell (rcresswe)" 
 wrote:

I tried searching the meeting logs but couldn’t find where we
 discussed this in the Searchlight meeting. The conclusion at the time was
 option 4 IIRC. The main thing is to make sure we get it done within one
 cycle, even if it isn’t default. This means searchlight-ui doesn’t have to
 carry 

Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are we ready?

2017-03-10 Thread Michael Johnson
Hi Syed,

 

To my knowledge the LBaaS team did not create any upgrade plan or tools to move 
load balancers from V1 to V2.  The data model is significantly different (and 
better) with V2 and I suspect that caused some challenges.

I know there was an as-is database conversion script contributed by an 
operator/packager that might help someone develop a migration path if their 
deployment wasn’t using one of the incompatible configurations, but that would 
only be one piece of the puzzle.

 

Since development beyond security fixes for v1 halted over two releases ago and 
the last of the v1 code will be removed from OpenStack in about 32 days (Mitaka 
goes EOL 4/10/17), I think it is going to be left to the last few folks still 
running LBaaS v1 to plan their migrations.  Most of the LBaaS team from the 
time of v1 deprecation are no longer on the team, so we don’t really have folks 
experienced with v1 available any longer.

 

I cannot speak to how hard or easy it would be to create a heat migration 
template to recreate the v1 load balancers under v2.

 

Beyond that, I can assure you that the migration from neutron-lbaas to octavia 
will have migration procedures and tools to automate the process.

 

Michael

 

From: Syed Armani [mailto:dce3...@gmail.com] 
Sent: Friday, March 10, 2017 1:58 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are 
we ready?

 

Folks,

 

I am going to ask the question raised by Zane one more time:

 

Is there a migration plan for Heat users who have existing stacks containing 
the v1 resources?

 

Cheers,

Syed

 

On Thu, Aug 25, 2016 at 7:10 PM, Assaf Muller wrote:

On Thu, Aug 25, 2016 at 7:35 AM, Gary Kotton wrote:
> Hi,
> At the moment it is still not clear to me the upgrade process from V1 to V2. 
> The migration script https://review.openstack.org/#/c/289595/ has yet to be 
> approved. Does this support all drivers or is this just the default reference 
> implementation driver?

The migration script doesn't have a test, so we really have no idea if
it's going to work.


> Are there people still using V1?
> Thanks
> Gary
>
> On 8/25/16, 4:25 AM, "Doug Wiegley" wrote:
>
>
> > On Mar 23, 2016, at 4:17 PM, Doug Wiegley wrote:
> >
> > Migration script has been submitted, v1 is not going anywhere from 
> stable/liberty or stable/mitaka, so it’s about to disappear from master.
> >
> > I’m thinking in this order:
> >
> > - remove jenkins jobs
> > - wait for heat to remove their jenkins jobs ([heat] added to this 
> thread, so they see this coming before the job breaks)
> > - remove q-lbaas from devstack, and any references to lbaas v1 in 
> devstack-gate or infra defaults.
> > - remove v1 code from neutron-lbaas
>
> FYI, all of the above have completed, and the final removal is in the 
> merge queue: https://review.openstack.org/#/c/286381/
>
> Mitaka will be the last stable branch with lbaas v1.
>
> Thanks,
> doug
>
> >
> > Since newton is now open for commits, this process is going to get 
> started.
> >
> > Thanks,
> > doug
> >
> >
> >
> >> On Mar 8, 2016, at 11:36 AM, Eichberger, German wrote:
> >>
> >> Yes, it’s Database only — though we changed the agent driver in the DB 
> from V1 to V2 — so if you bring up a V2 with that database it should 
> reschedule all your load balancers on the V2 agent driver.
> >>
> >> German
> >>
> >>
> >>
> >>
> >> On 3/8/16, 3:13 AM, "Samuel Bercovici" wrote:
> >>
> >>> So this looks like only a database migration, right?
> >>>
> >>> -Original Message-
> >>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
> >>> Sent: Tuesday, March 08, 2016 12:28 AM
> >>> To: OpenStack Development Mailing List (not for usage questions)
> >>> Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are
> >>> we ready?
> >>>
> >>> Ok, for what it’s worth we have contributed our migration script: 
> https://review.openstack.org/#/c/289595/ — please look at this as a starting 
> point and feel free to fix potential problems…
> >>>
> >>> Thanks,
> >>> German
> >>>
> >>>
> >>>
> >>>
> >>> On 3/7/16, 11:00 AM, "Samuel Bercovici" wrote:
> >>>
>  As far as I recall, you can specify the VIP in creating the LB so 
> you will end up with same IPs.
> 

[openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Heidi Joy Tretheway
Dmitry asked: "I'm late to the party, and I don't really have any strong 
opinions about a TripleO logo, but why is it blue? :) I'm not sure there 
exist any blue owls."

Answer: There aren’t any purple porcupines or blue polar bears, either, but the 
Barbican and Freezer mascots are in those colors as they fit with the overall 
style of playful colors and cartoon interpretations of creatures. While it’s 
possible to change the colors, I’d caution against it as we already have quite 
a lot of mascots in the brown/beige/yellow color set. That doesn’t mean it’s 
not possible, just a note of caution. We’ll proceed with a final once we have 
your team’s full feedback.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Monty Taylor
On 03/10/2017 10:59 AM, Clint Byrum wrote:
> Excerpts from Joshua Harlow's message of 2017-03-09 21:53:58 -0800:
>> Renat Akhmerov wrote:
>>>
 On 10 Mar 2017, at 06:02, Zane Bitter wrote:

 On 08/03/17 11:23, David Moreau Simard wrote:
> The App Catalog, to me, sounds sort of like a weird message that
> OpenStack somehow requires applications to be
> packaged/installed/deployed differently.
> If anything, perhaps we should spend more effort on advertising that
> OpenStack provides bare metal or virtual compute resources and that
> apps will work just like any other places.

 Look, it's true that legacy apps from the 90s will run on any VM you
 can give them. But the rest of the world has spent the last 15 years
 moving on from that. Applications of the future, and increasingly the
 present, span multiple VMs/containers, make use of services provided
 by the cloud, and interact with their own infrastructure. And users
 absolutely will need ways of packaging and deploying them that work
 with the underlying infrastructure. Even those apps from the 90s
 should be taking advantage of things like e.g. Neutron security
 groups, configuration of which is and will always be out of scope for
 Docker Hub images.

 So no, we should NOT spend more effort on advertising that we aim to
 become to cloud what Subversion is to version control. We've done far
 too much of that already IMHO.
>>>
>>> 100% agree with that.
>>>
>>> And this whole discussion is taking me to the question: is there really
>>> any officially accepted strategy for OpenStack for 1, 3, 5 years?
>>
>> I can propose what I would like for a strategy (it's not more VMs and 
>> more neutron security groups...), though if it involves (more) design by 
>> committee, count me out.
>>
>> I honestly believe we have to do the equivalent of a technology leapfrog 
>> if we actually want to be relevant; but maybe I'm too eager...
>>
> 
> Open source isn't really famous for technology leapfrogging.
> 
> And before you say "but Docker.." remember that Solaris had zones 13
> years ago.
> 
> What a community like ours is good at doing is gathering all the
> exciting industry leading bleeding edge chaos into a boring commodity
> platform. What Zane is saying (and I agree with) is let's make sure we see
> the whole cloud forest and not just focus on the VM trees in front of us.
> 
> I'm curious what you (Josh) or Zane would change too.
> Containers/apps/kubes/etc. have to run on computers with storage and
> networks. OpenStack provides a pretty rich set of features for giving
> users computers with storage on networks, and operators a way to manage
> those. So I fail to see how that is svn to "cloud native"'s git. It
> seems more foundational and complementary.

I agree with this strongly.

I get frustrated really quickly when people tell me that, as a user, I
_should_ want something different than what I actually do want.

It turns out, I _do_ want computers - or at least things that behave
like computers - for some percentage of my workloads. I'm not the only
one out there who wants that.

There are other humans out there who do not want computers. They don't
like computers at all. They want generic execution contexts into which
they can put their apps.

It's just as silly to tell those people that they _should_ want
computers as it is to tell me that I _should_ want a generic execution
context.

One of the wonderful things about the situation we're in now is that if
you stack a k8s on top of an OpenStack then you empower the USER to
decide what their workload is and which types of features it is - rather
than forcing a "cloud native" vision dreamed up by "thought leaders"
down everyone's throats.

It also turns out that the market agrees. Google App Engine was not
successful, until Google added IaaS. Azure was not successful until
Microsoft added IaaS. Amazon, who have a container service and a
serverless service is all built around the ecosystem that is centered on
... that's right ... an IaaS.

So rather than us trying to chase a thing we're not (we're not a
container or thin-app orchestration tool) - being comfortable with our
actual identity (IaaS provider of computers) and working with other
people who do other things ends up providing _users_ with a real win.

Considering computers as somehow inherently 'better' or 'worse' than
some of the 'cloud-native' concepts is hogwash. Different users have
different needs. As Clint points out - kubernetes needs to run
_somewhere_. CloudFoundry needs to run _somewhere_. So those are at
least two other potential users who are not me and my collection of
things I want to run that want to run in computers.


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Dmitry Tantsur

On 03/10/2017 05:26 PM, Heidi Joy Tretheway wrote:

Hi TripleO team,

Here’s an update on your project logo. Our illustrator tried to be as true as
possible to your original, while ensuring it matched the line weight, color
palette and style of the rest. We also worked to make sure that three Os in the
logo are preserved. Thanks for your patience as we worked on this! Feel free to
direct feedback to me.


I'm late to the party, and I don't really have any strong opinions about a 
TripleO logo, but why is it blue? :) I'm not sure there exist any blue owls.







Re: [openstack-dev] [ironic] New mascot design

2017-03-10 Thread Dmitry Tantsur

On 03/10/2017 05:28 PM, Heidi Joy Tretheway wrote:

Hi Ironic team,
Here’s an update on your project logo. Our illustrator tried to be as true as
possible to your original, while ensuring it matched the line weight, color
palette and style of the rest. Thanks for your patience as we worked on this!
Feel free to direct feedback to me; we really want to get this right for you.


Thanks, this variant is really in spirit of our original Pixie, as well as the 
other new logos. Great job!


I wonder if the eyes could be slightly bigger. Kind of a nit-pick though, it's 
fine as it is.









Re: [openstack-dev] [ironic] New mascot design

2017-03-10 Thread Lucas Alvares Gomes
Hi,

Thanks for the update Heidi. I think this version is fine so +1 from me, it
carries the essence of the original Pixie :-)

Cheers,
Lucas

On Fri, Mar 10, 2017 at 4:28 PM, Heidi Joy Tretheway  wrote:

> Hi Ironic team,
> Here’s an update on your project logo. Our illustrator tried to be as true
> as possible to your original, while ensuring it matched the line weight,
> color palette and style of the rest. Thanks for your patience as we worked
> on this! Feel free to direct feedback to me; we really want to get this
> right for you.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread Andrea Frittoli
On Fri, Mar 10, 2017 at 4:55 PM Sean McGinnis  wrote:

> > >
> > As far as I can tell:
> > - Cinder v1 if I'm not mistaken has been deprecated in Juno, so it's
> > deprecated in all supported releases.
> > - Glance v1 has been deprecated in Newton, so it's deprecated in all
> > supported releases
> > - Keystone v2 has been deprecated in Mitaka, so testing *must* stay in
> > Tempest until Mitaka EOL, which is in a month from now
> >
> > We should stop testing these three api versions in the common gate
> > including stable branches now (except for keystone v2 on stable/mitaka
> > which can run for one more month).
> >
> > Are cinder / glance / keystone willing to take over the API tests and run
> > them in their own gate until removal of the API version?
> >
> > Doug
>
> With Cinder's v1 API being deprecated for quite a while now, I would
> actually prefer to just remove all tempest tests and drop the API
> completely. There was some pushback on removal a few cycles back since
> there was concern (rightly so) that a lot of deployments and a lot of
> users were still using it.
>
> I think it has now been marked as deprecated long enough that if anyone
> is still using it, it's just out of obstinacy. We've removed the v1
> api-ref documentation, and the default in the client has been v2 for
> a while.
>
> Unless there's a strong objection, and a valid argument to support it,
> I really would just like to drop v1 from Cinder and not waste any more
> cycles on redoing tempest tests and reconfiguring jobs to support
> something we have stated for over two years that we were no longer going
> to support. Juno went EOL in December of 2015. I really hope it's safe
> now to remove.
>

sounds like a good plan to me.


>
> Sean
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Carlos Camacho Gonzalez
Hey

This version looks awesome, it really feels like what we currently have,
but upgraded xD

I don't see how we can keep the head feathers and so forth, so

+1

Thanks Heidi for your effort!

Cheers,
Carlos.

On Fri, Mar 10, 2017 at 5:36 PM, Jason Rist  wrote:

> On 03/10/2017 09:26 AM, Heidi Joy Tretheway wrote:
> > Hi TripleO team,
> >
> > Here’s an update on your project logo. Our illustrator tried to be as
> true as
> > possible to your original, while ensuring it matched the line weight,
> color
> > palette and style of the rest. We also worked to make sure that three Os
> in the
> > logo are preserved. Thanks for your patience as we worked on this! Feel
> free to
> > direct feedback to me.
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> This is a good interpretation of the original, possibly even making it
> better.  I like it.  Thanks Heidi!
>
> -J
>
> --
> Jason E. Rist
> Senior Software Engineer
> OpenStack User Interfaces
> Red Hat, Inc.
> Freenode: jrist
> github/twitter: knowncitizen
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2017-03-09 21:53:58 -0800:
> Renat Akhmerov wrote:
> >
> >> On 10 Mar 2017, at 06:02, Zane Bitter wrote:
> >>
> >> On 08/03/17 11:23, David Moreau Simard wrote:
> >>> The App Catalog, to me, sounds sort of like a weird message that
> >>> OpenStack somehow requires applications to be
> >>> packaged/installed/deployed differently.
> >>> If anything, perhaps we should spend more effort on advertising that
> >>> OpenStack provides bare metal or virtual compute resources and that
> >>> apps will work just like any other places.
> >>
> >> Look, it's true that legacy apps from the 90s will run on any VM you
> >> can give them. But the rest of the world has spent the last 15 years
> >> moving on from that. Applications of the future, and increasingly the
> >> present, span multiple VMs/containers, make use of services provided
> >> by the cloud, and interact with their own infrastructure. And users
> >> absolutely will need ways of packaging and deploying them that work
> >> with the underlying infrastructure. Even those apps from the 90s
> >> should be taking advantage of things like e.g. Neutron security
> >> groups, configuration of which is and will always be out of scope for
> >> Docker Hub images.
> >>
> >> So no, we should NOT spend more effort on advertising that we aim to
> >> become to cloud what Subversion is to version control. We've done far
> >> too much of that already IMHO.
> >
> > 100% agree with that.
> >
> > And this whole discussion is taking me to the question: is there really
> > any officially accepted strategy for OpenStack for 1, 3, 5 years?
> 
> I can propose what I would like for a strategy (it's not more VMs and 
> more neutron security groups...), though if it involves (more) design by 
> committee, count me out.
> 
> I honestly believe we have to do the equivalent of a technology leapfrog 
> if we actually want to be relevant; but maybe I'm too eager...
> 

Open source isn't really famous for technology leapfrogging.

And before you say "but Docker.." remember that Solaris had zones 13
years ago.

What a community like ours is good at doing is gathering all the
exciting industry leading bleeding edge chaos into a boring commodity
platform. What Zane is saying (and I agree with) is let's make sure we see
the whole cloud forest and not just focus on the VM trees in front of us.

I'm curious what you (Josh) or Zane would change too.
Containers/apps/kubes/etc. have to run on computers with storage and
networks. OpenStack provides a pretty rich set of features for giving
users computers with storage on networks, and operators a way to manage
those. So I fail to see how that is svn to "cloud native"'s git. It
seems more foundational and complementary.



Re: [openstack-dev] [neutron][astara-neutron]

2017-03-10 Thread Dariusz Śmigiel
> Looks like their IRC channel [0] died off awhile ago as well.
>
> [0] http://eavesdrop.openstack.org/irclogs/%23openstack-astara/
>

Sean thanks,
didn't think about checking their IRC channel. But based on that [0]
it seems like there won't be any more involvement.

[0] 
http://eavesdrop.openstack.org/irclogs/%23openstack-astara/%23openstack-astara.2016-10-07.log.html#t2016-10-07T16:53:17



Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from LBaaS v1 to v2

2017-03-10 Thread Michael Johnson
Yes, folks have recently deployed the dashboard with success.  I think you
had that discussion on the IRC channel, so I won't repeat it here.

Please note, the neutron-lbaas-dashboard does not support LBaaS v1, you must
have LBaaS v2 deployed for the neutron-lbaas-dashboard to work.  If you are
trying to use LBaaS v1, you can use the legacy panels included in the older
versions of horizon.

The question asked is a very old question and unfortunately the "Ask" site
doesn't do search or notifications very well.  This question hasn't come up
on our notification lists.  Sigh.

If you think there is an open bug for the dashboard, please report it in
https://bugs.launchpad.net/neutron-lbaas-dashboard

Michael

-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Friday, March 10, 2017 8:04 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from
LBaaS v1 to v2

I spent all day trying to deploy a Horizon instance with working panels
for LBaaSv2.
https://github.com/openstack/neutron-lbaas-dashboard

I tried stable/ocata and I was never able to list existing load balancers or
create a new load balancer.

Looks like I am not the only one with this issue:
https://ask.openstack.org/en/question/96790/lbaasv2-dashboard-issues/

Is there anyone who has a working setup?

Should I open a bug here?
https://bugs.launchpad.net/octavia/+filebug

Thanks

Saverio


On 09/03/17 16:19, Saverio Proto wrote:
> Hello,
> 
> I managed to do the database migration.
> 
> I had to skip this logic:
> https://github.com/openstack/neutron-lbaas/blob/master/tools/database-
> migration-from-v1-to-v2.py#L342-L353
> 
> I had to force flag=True
> 
> That code obviously breaks if you have LBaaS used by more than 1 tenant.
> 
> What was the goal? To make sure that a given healthmonitor is not
> reused in multiple pools?
> 
> Should the right approach be to check whether these two values are the same?:
> 
> select count(DISTINCT monitor_id) from poolmonitorassociations; select 
> count(monitor_id) from poolmonitorassociations;
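Saverio's proposed check can be expressed directly in code. The sketch below is a hypothetical illustration using an in-memory SQLite database; the table and column names (`poolmonitorassociations`, `monitor_id`) are taken from the email, not from the actual neutron-lbaas schema, and the real migration script would run against its own database connection.

```python
# Sketch of the "is any health monitor shared between pools?" check.
# Assumption: table/column names mirror the email, not the real schema.
import sqlite3

def monitors_are_unique(conn):
    """Return True if no monitor_id is associated with more than one pool."""
    cur = conn.execute(
        "SELECT COUNT(DISTINCT monitor_id), COUNT(monitor_id) "
        "FROM poolmonitorassociations")
    distinct_count, total_count = cur.fetchone()
    return distinct_count == total_count

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE poolmonitorassociations (pool_id TEXT, monitor_id TEXT)")
conn.executemany(
    "INSERT INTO poolmonitorassociations VALUES (?, ?)",
    [("pool-a", "mon-1"), ("pool-b", "mon-2")])
print(monitors_are_unique(conn))  # True: every monitor belongs to one pool

# Reusing mon-1 in a second pool makes the two counts diverge.
conn.execute("INSERT INTO poolmonitorassociations VALUES ('pool-c', 'mon-1')")
print(monitors_are_unique(conn))  # False
```

When the two counts match, every health monitor is used by exactly one pool, which is the condition the per-tenant flag in the script appears to be approximating.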
> 
> Second question: should the old tables from LBaaSV1 be dropped ?
> 
> Please give me feedback so I can fix the code and submit a review.
> 
> thank you
> 
> Saverio
> 
> 
> On 09/03/17 13:38, Saverio Proto wrote:
>>> I would recommend experimenting with the 
>>> database-migration-from-v1-to-v2.py
>>> script and working with your vendor (if you are using a vendor load 
>>> balancing engine) on a migration path.
>>
>>
>> Hello,
>> there is no vendor here to help us :)
>>
>> I made a backup of the current DB.
>>
>> I identified this folder on our Neutron server:
>>
>> /usr/lib/python2.7/dist-packages/neutron_lbaas/db/migration ; tree .
>> |-- alembic_migrations
>> |   |-- env.py
>> |   |-- env.pyc
>> |   |-- __init__.py
>> |   |-- __init__.pyc
>> |   |-- README
>> |   |-- script.py.mako
>> |   `-- versions
>> |   |-- 364f9b6064f0_agentv2.py
>> |   |-- 364f9b6064f0_agentv2.pyc
>> |   |-- 4b6d8d5310b8_add_index_tenant_id.py
>> |   |-- 4b6d8d5310b8_add_index_tenant_id.pyc
>> |   |-- 4ba00375f715_edge_driver.py
>> |   |-- 4ba00375f715_edge_driver.pyc
>> |   |-- 4deef6d81931_add_provisioning_and_operating_statuses.py
>> |   |-- 4deef6d81931_add_provisioning_and_operating_statuses.pyc
>> |   |-- CONTRACT_HEAD
>> |   |-- EXPAND_HEAD
>> |   |-- kilo_release.py
>> |   |-- kilo_release.pyc
>> |   |-- lbaasv2.py
>> |   |-- lbaasv2.pyc
>> |   |-- lbaasv2_tls.py
>> |   |-- lbaasv2_tls.pyc
>> |   |-- liberty
>> |   |   |-- contract
>> |   |   |   |-- 130ebfdef43_initial.py
>> |   |   |   `-- 130ebfdef43_initial.pyc
>> |   |   `-- expand
>> |   |   |-- 3345facd0452_initial.py
>> |   |   `-- 3345facd0452_initial.pyc
>> |   |-- mitaka
>> |   |   `-- expand
>> |   |   |-- 3426acbc12de_add_flavor_id.py
>> |   |   |-- 3426acbc12de_add_flavor_id.pyc
>> |   |   |-- 3543deab1547_add_l7_tables.py
>> |   |   |-- 3543deab1547_add_l7_tables.pyc
>> |   |   |-- 4a408dd491c2_UpdateName.py
>> |   |   |-- 4a408dd491c2_UpdateName.pyc
>> |   |   |-- 62deca5010cd_add_tenant_id_index_for_l7_tables.py
>> |   |   |-- 62deca5010cd_add_tenant_id_index_for_l7_tables.pyc
>> |   |   |-- 6aee0434f911_independent_pools.py
>> |   |   `-- 6aee0434f911_independent_pools.pyc
>> |   |-- start_neutron_lbaas.py
>> |   `-- start_neutron_lbaas.pyc
>> |-- __init__.py
>> `-- __init__.pyc
>>
>> 7 directories, 40 files
>>
>> Now here it says: "Create a revision file"
>>
>> https://github.com/openstack/neutron-lbaas/blob/master/tools/database
>> -migration-from-v1-to-v2.py#L30
>>
>> Is there some specific openstack-dev documentation on how to "Create a
>> revision file", or should I just learn the Alembic tool? I've never used it
>> before.
>>
>> So far I did copy the alembic.ini from here:
>> 

Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread Sean McGinnis
> >
> As far as I can tell:
> - Cinder v1 if I'm not mistaken has been deprecated in Juno, so it's
> deprecated in all supported releases.
> - Glance v1 has been deprecated in Newton, so it's deprecated in all
> supported releases
> - Keystone v2 has been deprecated in Mitaka, so testing *must* stay in
> Tempest until Mitaka EOL, which is in a month from now
> 
> We should stop testing these three api versions in the common gate
> including stable branches now (except for keystone v2 on stable/mitaka
> which can run for one more month).
> 
> Are cinder / glance / keystone willing to take over the API tests and run
> them in their own gate until removal of the API version?
> 
> Doug

With Cinder's v1 API being deprecated for quite a while now, I would
actually prefer to just remove all tempest tests and drop the API
completely. There was some pushback on removal a few cycles back since
there was concern (rightly so) that a lot of deployments and a lot of
users were still using it.

I think it has now been marked as deprecated long enough that if anyone
is still using it, it's just out of obstinacy. We've removed the v1
api-ref documentation, and the default in the client has been v2 for
a while.

Unless there's a strong objection, and a valid argument to support it,
I really would just like to drop v1 from Cinder and not waste any more
cycles on redoing tempest tests and reconfiguring jobs to support
something we have stated for over two years that we were no longer going
to support. Juno went EOL in December of 2015. I really hope it's safe
now to remove.

Sean



Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-10 Thread lương hữu tuấn
Hi,

I did not know about this change before, but after reading the whole story,
IMHO I like keeping it as simple as possible at this moment, as I think you
have agreed on. For MixinAction, PolicyMixin, etc., we can develop them
later when we have concrete case studies.

Br,

Tuan/Nokia

On Fri, Mar 10, 2017 at 11:16 AM, Renat Akhmerov 
wrote:

>
> On 10 Mar 2017, at 15:09, Dougal Matthews  wrote:
>
> On 10 March 2017 at 04:22, Renat Akhmerov 
> wrote:
>
>> Hi,
>>
>> I probably like the base class approach better too.
>>
>> However, I’m trying to understand if we need this variety of classes.
>>
>>- Do we need a separate class for asynchronous actions? IMO, since
>>is_sync() is just an instance method that can potentially return both True
>>and False based on the instance state, async behavior shouldn’t be
>>introduced by a special class. Otherwise it’s confusing that a class
>>declared as AsyncAction can actually be synchronous (if its is_sync()
>>returns True). So maybe we should just leave this method in the base class.
>>- I’m also wondering if we should just always pass “context” into the
>>run() method. Those action implementations that don’t need it could just
>>ignore it. Not sure though.
>>
>> This is a good point. I had originally thought it would be backwards
> incompatible to make this change - however, users will need to update their
> actions to inherit from mistral-lib so they will need to opt in. Then in
> mistral we can do something like...
>
> if isinstance(action, mistral_lib.Action):
>     action.run(ctx)
> else:
>     # deprecation warning about action now inheriting from mistral_lib
>     # and taking a context etc.
>     action.run()
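Dougal's dispatch idea can be sketched end to end. This is a hypothetical, self-contained illustration: the `Action` class below is a local stand-in for the mistral-lib base class (not an import of the real library), and `EchoAction`, `OldStyleAction`, and the warning text are made up for the example.

```python
# Sketch of dispatching on whether an action has opted in to the new
# context-taking base class. "Action" stands in for mistral-lib's base
# class; the concrete action classes here are illustrative only.
import warnings

class Action:  # stand-in for the mistral-lib base class
    def run(self, context):
        raise NotImplementedError

def run_action(action, ctx):
    if isinstance(action, Action):
        # New-style action: opted in to mistral-lib, receives the context.
        return action.run(ctx)
    # Old-style action: warn, then call run() without a context.
    warnings.warn(
        "Actions should inherit from mistral-lib's Action and accept a "
        "context argument in run()", DeprecationWarning)
    return action.run()

class EchoAction(Action):
    def run(self, context):
        return ("echo", context)

class OldStyleAction:  # pre-mistral-lib action, no context parameter
    def run(self):
        return "legacy"

print(run_action(EchoAction(), {"user": "admin"}))   # ('echo', {'user': 'admin'})
print(run_action(OldStyleAction(), {"user": "admin"}))  # legacy (after a warning)
```

The key property is that legacy actions keep working unchanged while new-style actions get the context, which is what makes the transition opt-in rather than breaking.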
>
>>
> Yes, right.
>
> As far as mixin approach, I’d say I’d be ok with having mixing for
>> context-based actions. Although, like Dougal said, it may be a little
>> harder to read, this approach gives a huge flexibility for long term.
>> Imagine if we want to have a class of actions that needs some different kind of
>> information. Just making it up… For example, some of my actions need to be
>> aware of some policies (Congress-like) or information about metrics of the
>> current operating system (this is probably a bad example because it’s easy
>> to use standard Python modules but I’m just trying to illustrate the idea).
>> In this case we could have PolicyMixin and OperatingSystemMixin that would
>> set required info into the instance state or provide with handle interfaces
>> for more advanced uses.
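Renat's mixin idea can be sketched concretely. The names `PolicyMixin` and `OperatingSystemMixin` come straight from his hypothetical and are not real mistral-lib classes; the policy data and facts below are invented for illustration.

```python
# Illustrative sketch of the mixin approach Renat describes. All class
# names and data here are hypothetical, mirroring the email's example.
import platform

class Action:  # stand-in for an action base class
    def run(self, context):
        raise NotImplementedError

class PolicyMixin:
    """Sets required policy info into the instance state, per the email."""
    def load_policies(self):
        # In a real system this would query a Congress-like policy service.
        self.policies = {"max_retries": 3}

class OperatingSystemMixin:
    """Provides a handle interface to facts about the current OS."""
    def os_facts(self):
        return {"system": platform.system()}

class DeployAction(PolicyMixin, OperatingSystemMixin, Action):
    def run(self, context):
        self.load_policies()
        return {"retries": self.policies["max_retries"],
                "os": self.os_facts()["system"]}

result = DeployAction().run({})
print(result["retries"])  # 3
```

Each mixin stays small and composable, which is the flexibility argument: an action class pulls in only the capabilities it needs, at the cost of a slightly harder-to-read class hierarchy.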
>>
>
> I like the idea of mixins if we can see a future with many small
> components that can be included in an action class. However, like you I
> didn't manage to think of any real examples.
>
> It should be possible to migrate to a mixin approach later if we have the
> need.
>
>
> Well, I didn’t manage to find real use cases probably because I don’t
> develop lots of actions :) Although the example with policies seems almost
> real to me. This is something that was raised several times during design
> sessions in the past. Anyway, I agree with you that seems like we can add
> mixins later if we want to. I don’t see any reasons now why not.
>
>
> Renat Akhmerov
> @Nokia
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 10)

2017-03-10 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 15:30-16:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

I skipped last week's summary as the CI Squad was very focused on making 
the Quickstart upstream job transition deadline of March 13th.


Things are in good shape! I want to emphasize here how well and how hard 
Gabriele, Sagi and Ben have worked together over the last few weeks on the transition.


We had daily stand-ups in the last three days instead of just the 
regular Thursday meeting.


Our current status is: GREEN.

We have the "oooq-t-phase1"[1] gerrit topic tracking the outstanding 
changes for the transition. There are 3 of them left unmerged, all in a very 
good state.


This WIP change[2] pulls together all the necessary changes, and we got 
good results on the undercloud-only, basic multinode and scenario 1-2 
jobs. We also reproduced the exact same failure as the current 
scenario001 job is experiencing, which is exactly what we want to see.


We expect the 3rd and 4th scenarios to work similarly well, as we 
previously had Quickstart-only runs with them, just not through this WIP 
change.


After we merge [2], we can change the "job type" in project-config to 
"flip the switch" and have the transitioned jobs be driven by Quickstart.


We're in good shape for a potential Monday transition.

Best regards,
Attila

[1] https://review.openstack.org/#/q/topic:oooq-t-phase1
[2] https://review.openstack.org/431029



Re: [openstack-dev] [all] Some information about the Forum at the Summit in Boston

2017-03-10 Thread Clint Byrum
Excerpts from Thierry Carrez's message of 2017-03-10 17:04:54 +0100:
> Ben Swartzlander wrote:
> > On 03/09/2017 12:10 PM, Jonathan Bryce wrote:
> >> Putting that aside, I appreciate your providing your input. The most
> >> consistent piece of feedback we received was around scheduling and
> >> visibility for sessions, so I think that is definitely an area for
> >> improvement at the next PTG. I heard mixed feedback on whether the
> >> ability to participate in multiple projects was better or worse than
> >> under the previous model, but understanding common conflicts ahead of
> >> time might give us a chance to schedule in a way that makes the
> >> multi-project work more possible. Did you participate in both Cinder
> >> and Manila mid-cycles in addition to the Design Summit sessions
> >> previously? Trying to understand which types of specific interactions
> >> you’re now less able to participate in.
> > 
> > Yes in the past I was able to attend all of the Manila and most of the
> > Cinder sessions at the Design summit, and I was able to attend the
> > Cinder midcycles in person and (since I'm the PTL) I was able to
> > schedule the Manila midcycles to not conflict.
> 
> On that particular bit of feedback ("making it impossible to participate
> in 2 or more vertical projects") it is feedback that I definitely heard :)
> 
> While the event structure made it generally a lot easier to tap into
> other teams (and a *lot* of them did just that), the horizontal/vertical
> split definitely made it more difficult for Manila folks to attend all
> Cinder sessions, or for Storlets folks to attend all Swift sessions. On
> a personal note, it made it more difficult for *me* to attend all
> Architecture WG and Stewardship workgroup and Release Team and Infra
> sessions, which were all going on at the same time on Monday/Tuesday. So
> it's not something that only affected vertical projects.
> 
> We can definitely improve on that, and come up with a more... creative
> way to split usage of rooms than a pretty-arbitrary grouping of projects
> into "vertical" and "horizontal" groups. There is no miracle solution
> (there will always be someone needing to be in two places at the same
> time), but the strawman split we tried for the first one is certainly
> not the optimal solution. If you have suggestions on how we can better
> map room/days, I'm all ears. I was thinking about taking input on major
> team overlaps (like the one you pointed to between Manila and Cinder) as
> a first step, and try to come up with a magic formula that would
> minimize conflicts.
> 

For those who didn't ever use it, attendees to UDS (Ubuntu Dev Summit)
would mark themselves as interested or required for sessions (in the
Launchpad blueprint system), and would express which days they'd be
at the summit.  Then a scheduling program would automatically generate
schedules with the least amount of conflicts.

I'm not saying we should copy summit's model, or (noo) use the
actual Summit django app[1]. But we _could_ use a similar algorithm,
except have project teams, instead of individuals, as the attendees,
perhaps splitting liaisons/TC members/etc. off as their own groups,
and then at least have an optimal schedule generated.
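A toy sketch of that kind of conflict-minimizing assignment (greedy, with made-up session and attendee data; the real UDS scheduler was more elaborate):

```python
def schedule(sessions, interest, slots):
    """Greedily place each session in the slot with the fewest conflicts.

    sessions: list of session names
    interest: dict mapping session name -> set of attending groups
    slots: number of parallel time slots
    Returns a dict mapping session name -> slot index.
    """
    assignment = {}
    # Place the most-attended sessions first; they are hardest to fit.
    for s in sorted(sessions, key=lambda s: -len(interest[s])):
        best_slot, best_clash = 0, None
        for slot in range(slots):
            # Conflicts: attendees shared with sessions already in this slot.
            clash = sum(len(interest[s] & interest[t])
                        for t, sl in assignment.items() if sl == slot)
            if best_clash is None or clash < best_clash:
                best_slot, best_clash = slot, clash
        assignment[s] = best_slot
    return assignment
```

For the PTG, `interest` could hold project teams (Cinder, Manila, ...) rather than individuals, as suggested above, so teams with heavy overlap land in different slots.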

A few summary points about UDS for those interested (tl;dr - It's not perfect)

 * UDS also wanted everyone to move around in the hallways every hour or
   so. There was a desire to _not_ let people "camp out" in one room. As
   a person who thrives in the bustling hallways, I like that. But those
   who need a quieter room to build confidence, and a more parliamentary
   procedure to get their voice heard are penalized. Also, the PTG is for
   focused hacking time too, but that could be solved by having large
   blocks for focus/pairing/etc. time.

 * Running the summit scheduler in real-time as sessions and attendees
   were added created _unbelievable chaos_. There was almost always
   somebody cursing at the summit schedule screens as their most important
   session was moved to a small room, double-booked, or moved to a day
   they weren't going to be there. In my 7 or so UDS's, I think we only
   tried that for 2 of them before it was locked down the week before UDS.

 * Not running the summit scheduler in real-time meant that added
   sessions and new attendees were at a disadvantage and had to manually
   try to coordinate with the free space on the schedule. Since the
   schedule was tuned to the more static attendee base and set of
   sessions, this usually meant that the hotel bar after sessions was
   the more reliable place to have a discussion that wasn't expected.

[1] https://launchpad.net/summit



Re: [openstack-dev] [neutron][astara-neutron]

2017-03-10 Thread Sean McGinnis
On Fri, Mar 10, 2017 at 09:32:15AM -0600, Dariusz Śmigiel wrote:
> Hey,
> the astara-neutron repo seems to have been broken for a long time [1].
> The last merged patches are dated July 2016 [2]. After that, there is a
> queue of patches with Jenkins -1s. It doesn't look like it is maintained anymore.
> 
> Is there an interest in the Community to fix that?
> 
> BR,
> dasm
> 
> [1] https://review.openstack.org/#/q/project:openstack/astara-neutron
> [2] https://review.openstack.org/#/c/340033/
> 

Looks like their IRC channel [0] died off a while ago as well.

[0] http://eavesdrop.openstack.org/irclogs/%23openstack-astara/



Re: [openstack-dev] [ironic] New mascot design

2017-03-10 Thread Pavlo Shchelokovskyy
Hi Heidi,

imo this one is the best so far.

+1 I like it too.

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Fri, Mar 10, 2017 at 6:37 PM, Miles Gould  wrote:

> On 10/03/17 16:28, Heidi Joy Tretheway wrote:
>
>> Hi Ironic team,
>> Here’s an update on your project logo. Our illustrator tried to be as
>> true as possible to your original, while ensuring it matched the line
>> weight, color palette and style of the rest. Thanks for your patience as
>> we worked on this! Feel free to direct feedback to me; we really want to
>> get this right for you.
>>
>
> I like it!
>
> Miles
>
>


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Joshua Harlow

gordon chung wrote:


On 10/03/17 12:53 AM, Joshua Harlow wrote:

I can propose what I would like for a strategy (it's not more VMs and
more neutron security groups...), though if it involves (more) design by
committee, count me out.

I honestly believe we have to do the equivalent of a technology leapfrog
if we actually want to be relevant; but maybe I'm too eager...


seems like a manifesto waiting to be penned. probably best in a separate
thread... not sure with what tag if any. regardless, i think it'd be
good to see what others' long-term vision is... maybe it'll help others
consider what their own expectations are.


Ya, it sort of is, ha. I will start another thread; and put my thinking 
cap on to make sure it's a tremendously huge manifesto, lol.




cheers,





Re: [openstack-dev] [ironic] New mascot design

2017-03-10 Thread Miles Gould

On 10/03/17 16:28, Heidi Joy Tretheway wrote:

Hi Ironic team,
Here’s an update on your project logo. Our illustrator tried to be as
true as possible to your original, while ensuring it matched the line
weight, color palette and style of the rest. Thanks for your patience as
we worked on this! Feel free to direct feedback to me; we really want to
get this right for you.


I like it!

Miles



Re: [openstack-dev] [neutron]

2017-03-10 Thread Ihar Hrachyshka
On Tue, Feb 21, 2017 at 7:03 AM, Roua Touihri 
wrote:

> How can we interconnect two VMs without using a bridge or a switch as an
> intermediate? That is, only via a virtual link (e.g. veth or tap). In fact,
> I see that when we create an additional subnet and two ports on the given
> subnet, and I attach each port to a running VM, neutron uses a bridge
> as an intermediate element. I want to create a mesh topology between
> several VMs of the same tenant in addition to or without the default
> network created by neutron.
>
>
If you don't care about the neutron network at all and want a different
topology, then maybe neutron is not the project that should solve the issue for you.

But what is exactly the use case for that topology in your environment?

Ihar


Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Jason Rist
On 03/10/2017 09:26 AM, Heidi Joy Tretheway wrote:
> Hi TripleO team,
>
> Here’s an update on your project logo. Our illustrator tried to be as true as
> possible to your original, while ensuring it matched the line weight, color
> palette and style of the rest. We also worked to make sure that three Os in 
> the
> logo are preserved. Thanks for your patience as we worked on this! Feel free 
> to
> direct feedback to me.
>
>
>
>
This is a good interpretation of the original, possibly even making it better.  
I like it.  Thanks Heidi!

-J

-- 
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
Freenode: jrist
github/twitter: knowncitizen



[openstack-dev] [storlets] Mascot request

2017-03-10 Thread Heidi Joy Tretheway
Hi Storlets team, 

Your team selected a storklet as a project mascot, and our illustrators 
developed this draft. While you sent us a picture of a nesting storklet (where 
we couldn’t see the legs), the illustrator thought showing the legs would best 
help distinguish this as a storklet from other birds. Feel free to send 
feedback my way!





Re: [openstack-dev] [ironic] New mascot design

2017-03-10 Thread Heidi Joy Tretheway
Hi Ironic team, 
Here’s an update on your project logo. Our illustrator tried to be as true as 
possible to your original, while ensuring it matched the line weight, color 
palette and style of the rest. Thanks for your patience as we worked on this! 
Feel free to direct feedback to me; we really want to get this right for you. 




[openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Heidi Joy Tretheway
Hi TripleO team, 

Here’s an update on your project logo. Our illustrator tried to be as true as 
possible to your original, while ensuring it matched the line weight, color 
palette and style of the rest. We also worked to make sure that three Os in the 
logo are preserved. Thanks for your patience as we worked on this! Feel free to 
direct feedback to me.


Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from LBaaS v1 to v2

2017-03-10 Thread Saverio Proto
I spent all day trying to deploy a Horizon instance with working
panels for LBaaSv2.
https://github.com/openstack/neutron-lbaas-dashboard

I tried stable/ocata, and I was never able to list existing load balancers
or create a new load balancer.

Looks like I am not the only one with this issue:
https://ask.openstack.org/en/question/96790/lbaasv2-dashboard-issues/

Is there anyone who has a working setup?

Should I open a bug here?
https://bugs.launchpad.net/octavia/+filebug

Thanks

Saverio


On 09/03/17 16:19, Saverio Proto wrote:
> Hello,
> 
> I managed to do the database migration.
> 
> I had to skip this logic:
> https://github.com/openstack/neutron-lbaas/blob/master/tools/database-migration-from-v1-to-v2.py#L342-L353
> 
> I had to force flag=True
> 
> That code obviously breaks if you have LBaaS used by more than 1 tenant.
> 
> What was the goal? To make sure that a given health monitor is not
> reused in multiple pools?
> 
> Should the right approach be to check if these two values are the same ?:
> 
> select count(DISTINCT monitor_id) from poolmonitorassociations;
> select count(monitor_id) from poolmonitorassociations;
> 
> Second question: should the old tables from LBaaSV1 be dropped ?
> 
> Please give me feedback so I can fix the code and submit a review.
> 
> thank you
> 
> Saverio
> 
> 
> On 09/03/17 13:38, Saverio Proto wrote:
>>> I would recommend experimenting with the database-migration-from-v1-to-v2.py
>>> script and working with your vendor (if you are using a vendor load
>>> balancing engine) on a migration path.
>>
>>
>> Hello,
>> there is no vendor here to help us :)
>>
>> I made a backup of the current DB.
>>
>> I identified this folder on our Neutron server:
>>
>> /usr/lib/python2.7/dist-packages/neutron_lbaas/db/migration ; tree
>> .
>> |-- alembic_migrations
>> |   |-- env.py
>> |   |-- env.pyc
>> |   |-- __init__.py
>> |   |-- __init__.pyc
>> |   |-- README
>> |   |-- script.py.mako
>> |   `-- versions
>> |   |-- 364f9b6064f0_agentv2.py
>> |   |-- 364f9b6064f0_agentv2.pyc
>> |   |-- 4b6d8d5310b8_add_index_tenant_id.py
>> |   |-- 4b6d8d5310b8_add_index_tenant_id.pyc
>> |   |-- 4ba00375f715_edge_driver.py
>> |   |-- 4ba00375f715_edge_driver.pyc
>> |   |-- 4deef6d81931_add_provisioning_and_operating_statuses.py
>> |   |-- 4deef6d81931_add_provisioning_and_operating_statuses.pyc
>> |   |-- CONTRACT_HEAD
>> |   |-- EXPAND_HEAD
>> |   |-- kilo_release.py
>> |   |-- kilo_release.pyc
>> |   |-- lbaasv2.py
>> |   |-- lbaasv2.pyc
>> |   |-- lbaasv2_tls.py
>> |   |-- lbaasv2_tls.pyc
>> |   |-- liberty
>> |   |   |-- contract
>> |   |   |   |-- 130ebfdef43_initial.py
>> |   |   |   `-- 130ebfdef43_initial.pyc
>> |   |   `-- expand
>> |   |   |-- 3345facd0452_initial.py
>> |   |   `-- 3345facd0452_initial.pyc
>> |   |-- mitaka
>> |   |   `-- expand
>> |   |   |-- 3426acbc12de_add_flavor_id.py
>> |   |   |-- 3426acbc12de_add_flavor_id.pyc
>> |   |   |-- 3543deab1547_add_l7_tables.py
>> |   |   |-- 3543deab1547_add_l7_tables.pyc
>> |   |   |-- 4a408dd491c2_UpdateName.py
>> |   |   |-- 4a408dd491c2_UpdateName.pyc
>> |   |   |-- 62deca5010cd_add_tenant_id_index_for_l7_tables.py
>> |   |   |-- 62deca5010cd_add_tenant_id_index_for_l7_tables.pyc
>> |   |   |-- 6aee0434f911_independent_pools.py
>> |   |   `-- 6aee0434f911_independent_pools.pyc
>> |   |-- start_neutron_lbaas.py
>> |   `-- start_neutron_lbaas.pyc
>> |-- __init__.py
>> `-- __init__.pyc
>>
>> 7 directories, 40 files
>>
>> Now here it says: "Create a revision file"
>>
>> https://github.com/openstack/neutron-lbaas/blob/master/tools/database-migration-from-v1-to-v2.py#L30
>>
>> Is there some specific openstack-dev documentation on how to "Create a
>> revision file", or should I just learn the Alembic tool? I have never used it before.
>>
>> So far I did copy the alembic.ini from here:
>> https://github.com/openstack/neutron/blob/master/neutron/db/migration/alembic.ini
>>
>> into /usr/lib/python2.7/dist-packages/neutron_lbaas/db/migration
>>
>> then I did run the command:
>>
>> alembic revision -m "migrate to LBaaSv2"
>>
>> as a result it created the file:
>> /usr/lib/python2.7/dist-packages/neutron_lbaas/db/migration/alembic_migrations/versions/24274573545b_migrate_to_lbaasv2.py
>>
>> Then I added the script to that file:
>> wget -O - https://raw.githubusercontent.com/openstack/neutron-lbaas/master/tools/database-migration-from-v1-to-v2.py > alembic_migrations/versions/24274573545b_migrate_to_lbaasv2.py
>>
>> I tried now:
>> neutron-db-manage upgrade heads
>>
>> but it fails with an easy stacktrace. I get stuck here:
>> https://github.com/openstack/neutron-lbaas/blob/master/tools/database-migration-from-v1-to-v2.py#L346
>>
>> of course the query:
>> select tenant_id from pools;
>>
>> returns more than 1 tenant_id
>>
>> it means I cannot migrate if I 
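For reference, the uniqueness check Saverio describes above (comparing the DISTINCT monitor count with the total count) can be sketched in Python; the (pool_id, monitor_id) row shape is assumed from the quoted poolmonitorassociations queries:

```python
from collections import Counter

def reused_monitors(associations):
    """Given (pool_id, monitor_id) rows from poolmonitorassociations,
    return the monitor ids attached to more than one pool.

    An empty result means COUNT(DISTINCT monitor_id) equals
    COUNT(monitor_id), i.e. the migration's assumption that each
    health monitor is used by exactly one pool holds."""
    counts = Counter(monitor_id for _pool_id, monitor_id in associations)
    return {m for m, n in counts.items() if n > 1}
```

For example, `reused_monitors([(1, "hm-a"), (2, "hm-a"), (3, "hm-b")])` returns `{"hm-a"}`, flagging the shared monitor that would trip the migration script.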

Re: [openstack-dev] [all] Some information about the Forum at the Summit in Boston

2017-03-10 Thread Thierry Carrez
Ben Swartzlander wrote:
> On 03/09/2017 12:10 PM, Jonathan Bryce wrote:
>> Putting that aside, I appreciate your providing your input. The most
>> consistent piece of feedback we received was around scheduling and
>> visibility for sessions, so I think that is definitely an area for
>> improvement at the next PTG. I heard mixed feedback on whether the
>> ability to participate in multiple projects was better or worse than
>> under the previous model, but understanding common conflicts ahead of
>> time might give us a chance to schedule in a way that makes the
>> multi-project work more possible. Did you participate in both Cinder
>> and Manila mid-cycles in addition to the Design Summit sessions
>> previously? Trying to understand which types of specific interactions
>> you’re now less able to participate in.
> 
> Yes in the past I was able to attend all of the Manila and most of the
> Cinder sessions at the Design summit, and I was able to attend the
> Cinder midcycles in person and (since I'm the PTL) I was able to
> schedule the Manila midcycles to not conflict.

On that particular bit of feedback ("making it impossible to participate
in 2 or more vertical projects") it is feedback that I definitely heard :)

While the event structure made it generally a lot easier to tap into
other teams (and a *lot* of them did just that), the horizontal/vertical
split definitely made it more difficult for Manila folks to attend all
Cinder sessions, or for Storlets folks to attend all Swift sessions. On
a personal note, it made it more difficult for *me* to attend all
Architecture WG and Stewardship workgroup and Release Team and Infra
sessions, which were all going on at the same time on Monday/Tuesday. So
it's not something that only affected vertical projects.

We can definitely improve on that, and come up with a more... creative
way to split usage of rooms than a pretty-arbitrary grouping of projects
into "vertical" and "horizontal" groups. There is no miracle solution
(there will always be someone needing to be in two places at the same
time), but the strawman split we tried for the first one is certainly
not the optimal solution. If you have suggestions on how we can better
map room/days, I'm all ears. I was thinking about taking input on major
team overlaps (like the one you pointed to between Manila and Cinder) as
a first step, and try to come up with a magic formula that would
minimize conflicts.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tc][appcat][murano][app-catalog] The future of the App Catalog

2017-03-10 Thread Thierry Carrez
Christopher Aedo wrote:
> On Thu, Mar 9, 2017 at 4:08 AM, Thierry Carrez  wrote:
>> Christopher Aedo wrote:
>>> On Mon, Mar 6, 2017 at 3:26 AM, Thierry Carrez  
>>> wrote:
 [...]
 In parallel, Docker developed a pretty successful containerized
 application marketplace (the Docker Hub), with hundreds of thousands of
 regularly-updated apps. Keeping the App Catalog around (including its
 thinly-wrapped Docker container Murano packages) make us look like we
 are unsuccessfully trying to compete with that ecosystem, while
 OpenStack is in fact completely complementary.
>>>
>>> Without something like Murano "thinly wrapping" docker apps, how would
>>> you propose current users of OpenStack clouds deploy docker apps?  Or
>>> any other app for that matter?  It seems a little unfair to talk about
>>> murano apps this way when no reasonable alternative exists for easily
>>> deploying docker apps.  When I look back at the recent history of how
>>> we've handled containers (nova-docker, magnum, kubernetes, etc) it
>>> does not seem like we're making it easy for the folks who want to
>>> deploy a container on their cloud...
>>
>> I'd say there are two approaches: you can use the container-native
>> approach ("docker run" after provisioning some container-enabled host
>> using Nova or K8s cluster using Magnum), or you can use the
>> OpenStack-native approach (zun create nginx) and have it
>> auto-provisioned for you. Those projects have a narrower scope, and
>> fully co-opt the container ecosystem without making us appear as trying
>> to build our own competitive application packaging/delivery/marketplace
>> mechanism.
>>
>> I just think that adding the Murano abstraction in the middle of it and
>> using an AppCatalog-provided Murano-powered generic Docker container
>> wrapper is introducing unnecessary options and complexity -- options
>> that are strategically hurting us when we talk to those adjacent
>> communities...
> 
> OK thank you for making it clearer, now I understand where you're
> coming from.  I do agree with this sentiment.  I don't have any
> experience with zun but it sounds like it's the least-cost way to
> deploy a docker app in the environments where it's installed.
> 
> I think overall the app catalog was an interesting experiment, but I
> don't think it makes sense to continue as-is.  Unless someone comes up
> with a compelling new direction, I don't see much point in keeping it
> running.  Especially since it sounds like Mirantis is on board (and
> the connection to a murano ecosystem was the only thing I saw that
> might be interesting).

Right -- it's also worth noting that I'm only talking about the App
Catalog here, not about Murano. Zun might be a bit too young for us to
place all our eggs in the same basket, and some others pointed out that
Murano is still a viable alternative for packaging things that are more
complex than a set of containers. What I'm questioning at the moment is
the continued existence of a marketplace that did not catch fire as much
as we hoped -- an app marketplace with not enough apps just hurts more
than it helps imho.

In particular, I'm fine if (for example) the Docker-wrapper murano
package ends up being shipped as a standard/example package /in/ Murano,
and continues to exist there as a "reasonable alternative for easily
deploying docker apps" :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Openstack] [neutron]

2017-03-10 Thread Abdulhalim Dandoush
Dear all,

The issue is that we want to create a virtual link between two VMs on
the same physical machine (regardless of their interfaces and the
networks already created by neutron that interconnect them through
br-int or OVS). Let us consider the simplest scenario: same physical
machine, same project, etc.

More specifically, let's say that using network namespaces we can do it
easily as follows, and we want to do the same for compute VMs.

Create two network namespaces and interconnect them via a direct veth link

--
$ sudo ip netns add vm1
$ sudo ip netns add vm2
$ sudo ip link add name veth1 type veth peer name veth2
$ sudo ip link set dev veth2 netns vm2
$ sudo ip link set dev veth1 netns vm1
$ sudo ip netns exec vm2 ip link set dev veth2 up
$ sudo ip netns exec vm1 ip link set dev veth1 up
$ sudo ip netns exec vm2 ip address add 10.1.1.2/24 dev veth2
$ sudo ip netns exec vm1 ip address add 10.1.1.1/24 dev veth1
$ sudo ip netns exec vm1 bash

# ifconfig
veth1 Link encap:Ethernet  HWaddr 1e:d8:69:ba:76:e2
      inet addr:10.1.1.1  Bcast:0.0.0.0  Mask:255.255.255.0
      inet6 addr: fe80::1cd8:69ff:feba:76e2/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:8 errors:0 dropped:0 overruns:0 frame:0
      TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

# ping 10.1.1.2
PING 10.1.1.2 (10.1.1.2) 56(84) bytes of data.
64 bytes from 10.1.1.2: icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from 10.1.1.2: icmp_seq=2 ttl=64 time=0.061 ms
64 bytes from 10.1.1.2: icmp_seq=3 ttl=64 time=0.072 ms
^C
--- 10.1.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
--

Thanks in advance,

Abdulhalim

On 22/02/2017 02:18, 김기석 [Kiseok Kim] wrote:
> Hello Roua Touihri,
> 
> I think you should consider a neutron service plugin.
> 
> With a neutron service plugin, you could create your own network
> service, whatever you want.
> 
> You could insert your code into the service plugin, interacting with
> events (create, delete, update) and resources (port, subnet, network..)
> 
> Thank you.
> 
> *From:*Roua Touihri [mailto:roua.toui...@devoteam.com]
> *Sent:* Wednesday, February 22, 2017 12:03 AM
> *To:* openstack-dev@lists.openstack.org; openst...@lists.openstack.org
> *Cc:* Abdulhalim Dandoush
> *Subject:* [Openstack] [openstack-dev][neutron]
> 
>  
> 
> Hello everybody,
> 
>  
> 
> How can we interconnect two VMs without using a bridge or a switch as an
> intermediate? That is, only via a virtual link (e.g. veth or tap). In
> fact, I see that when we create an additional subnet and two ports on the
> given subnet, and I attach each port to a running VM, neutron uses
> a bridge as an intermediate element. I want to create a mesh topology
> between several VMs of the same tenant in addition to or without the
> default network created by neutron.
> 
>  
> 
> If such a configuration were not possible, I must then create new APIs
> for doing that. However, I do not know how to get started with this
> task. I guess that I should create new classes similar to the
> "neutron.create_port" one and maybe overload the veth/tap constructor.
> 
>  
> 
> Thanks in advance
> 
>  
> 
> -- 
> 
>  
> 
> ​Kindly,
> 
>  
> 
>  
> 
> R.T
> 

-- 
Abdulhalim Dandoush
---
PhD in Information Technology
Researcher-Teacher
ESME Sudria, Paris Sud
Images, Signals and Networks Lab
Tel: +33 1 56 20 62 33
Fax: +33 1 56 20 62 62



[openstack-dev] [nova] placement/resource providers update 14

2017-03-10 Thread Chris Dent


This week in resource providers and placement we've been working to
merge some specs and verify in progress work.

# What Matters Most

As with last week, traits remain the top of the priority stack. The
spec for that merged this week so now the review focus changes to
the implementation. Links for that in the Themes section below.

# What's Changed

The ironic inventory handling work has led to a new get_inventory()
method in virt drivers:

https://review.openstack.org/#/c/441543/

The ironic implementation is the only one in progress thus far:

https://review.openstack.org/#/c/441544/

CORS support merged. If cors.allowed_origin is set in nova.conf it
is turned on for placement.
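A minimal sketch of what that looks like in nova.conf (the origin value is a placeholder; the option comes from oslo.middleware's standard [cors] group):

```ini
[cors]
# Origins allowed to make cross-domain requests to the placement API.
allowed_origin = https://dashboard.example.com
```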

# Main Themes

## Traits

The work to normalize trait names (to be more like custom resource
classes) in the os-traits library has all merged, resulting in a new
release:

https://pypi.python.org/pypi/os-traits

The work to implement the merged spec trait is ongoing at


https://review.openstack.org/#/q/status:open+topic:bp/resource-provider-traits

There is a review to fix some typos in the spec:

https://review.openstack.org/#/c/444065/

## Shared Resource Providers

https://blueprints.launchpad.net/nova/+spec/shared-resources-pike

Progress on this will continue once traits have moved forward.

## Nested Resource Providers

https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

According to mriedem, the nested-resource-providers spec should be updated to
reflect the flipboard discussion from the PTG (reported in this
space last week) about multi-parent providers and how they aren't
going to happen. Previous email:

http://lists.openstack.org/pipermail/openstack-dev/2017-March/113332.html

## Docs

https://review.openstack.org/#/q/topic:cd/placement-api-ref

The start of creating an API ref for the placement API. Not a lot
there yet as I haven't had much of an opportunity to move it along.
There is, however, enough there for additional content to be
started, if people have the opportunity to do so. Check with me to
divvy up the work if you'd like to contribute.

## Claims in the Scheduler

A "Super WIP" spec for claims in the scheduler has started at

https://review.openstack.org/#/c/437424/

As I say there, I think it is useful to discuss this stuff, but more
important to focus on getting what we already have working as that
will inform and change the future.

## Performance

We're aware that there are some redundancies in the resource tracker
that we'd like to clean up

http://lists.openstack.org/pipermail/openstack-dev/2017-January/110953.html

but it's also the case that we've done no performance testing on the
placement service itself.

We ought to do some testing to make sure there aren't unexpected
performance drains.

# Other Code/Specs

* https://review.openstack.org/#/c/441881/
  Checking for placement microversion 1.4 in nova-status.

* https://bugs.launchpad.net/nova/+bug/1635182
  Fixing it so we don't have to add json_error_formatter everywhere.
  There's a collection of related fixes attached to that bug report.

  Pushkar, you might want to make all of those have the same topic,
  or put them in a stack of related changes.

  These are basically ready and, besides cleaning up some boilerplate
  in the code, they help make sure that error output is consistent.

* https://review.openstack.org/#/q/status:open+topic:valid_inventories
  Fixes that ensure that we only accept valid inventories when setting
  them.

  Needs second +2.

* https://review.openstack.org/#/c/416751/
  Removing the Allocation.create() method which was only ever used in
  tests and not in the actual, uh, creation of allocations.

* https://review.openstack.org/#/c/416669/
  DELETE all inventories on one resource provider. The spec merged,
  this is the implementation. Open question on how to manage one
  of the utility functions therein.

* https://review.openstack.org/#/c/418393/
   A spec for improving the level of detail and structure in placement
   error responses so that it is easier to distinguish between
   different types of, for example, 409 responses.

* https://review.openstack.org/#/c/423872/
   Spec for versioned-object based notification of events in the
   placement API. We had some discussion in the weekly subteam
   meeting about what the use cases are for this, starting

   
http://eavesdrop.openstack.org/meetings/nova_scheduler/2017/nova_scheduler.2017-03-06-14.00.log.html#l-120

   The gist is: "we should do this when we can articulate the
   problem the notifications will solve". Comment on the review if
   you have some opinions on that.

* https://review.openstack.org/#/c/382613/
   A little demo script for showing how a cronjob to update inventory
   on a shared resource provider might work. Last week I asked if
   this is still relevant and if not should I abandon it. I got no
   response, so it might be time to do so.
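The nova-status check in the first item above boils down to comparing microversion strings; a hedged sketch of the comparison it needs (function name invented):

```python
# Sketch of the kind of comparison a microversion check needs: 'X.Y'
# microversion strings must be compared numerically, since a lexical
# compare gets '1.10' vs '1.4' wrong.

def microversion_at_least(reported, needed):
    to_tuple = lambda v: tuple(int(p) for p in v.split('.'))
    return to_tuple(reported) >= to_tuple(needed)

print(microversion_at_least('1.10', '1.4'))   # True
print('1.10' >= '1.4')                        # False: why lexical compare fails
```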


[openstack-dev] [ironic] Hardware console processes in multi-conductor environment

2017-03-10 Thread Yuriy Zveryanskyy
Hi all.

Hardware node consoles have a specific constraint: only a limited
number of concurrent console sessions (often just one) can be
established. There are some issues (described below) due to a conflict
between the distributed ironic conductor services and local console
processes. This affects only the case of local console processes,
currently shellinabox and socat for example.

There are some possible "global" solutions:

1) Pluggable internal task API [1], currently rejected by the community;

2) Non-pluggable internal task API that uses an external service (there
is no suitable service currently in OpenStack);

3) Custom distributed process management based on SSH access between
ironic conductor hosts (looks like a hack);

4) New console interface drivers which implement task management
internally (like "k8s_shellinabox", "k8s_socat").

Partial solutions (some of them proposed below) are also possible.

In a multi-conductor environment an ironic conductor process can die,
be stopped, or be blocked (removed), or be started/restarted (added).
Possible cases:

1) Conductor removed

a) Gracefully stopped. Some daemon processes, like shellinabox for
consoles, can continue to run. This issue can be fixed now as a
separate bug.

b) Died/killed. Daemon processes can continue to run. This issue can
be fixed only by distributed task management (the "global" solutions
above).

c) The whole host with the conductor died. No fix needed.

2) Conductor added/restarted

A new conductor tries to start processes for enabled consoles, but
currently the processes on the conductor hosts that worked with these
nodes before are not stopped [2]. I see two possible solutions for
this issue:

1) An "untakeover" periodic task for stopping console processes. For
this solution we should not stop non-local consoles.

2) Do not stop the process on the old conductor. Use redefined RPC
routing on the API side (based on the conductor that started the
console, saved in the DB) for setting the console, and wait for it to
stop via the API. This routing should also ignore dead conductors.

If you have some ideas please leave comments.

[1] https://review.openstack.org/#/c/431605/

[2] https://bugs.launchpad.net/ironic/+bug/1632192
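To make solution 1 above concrete, here is a hedged sketch of what such an "untakeover" periodic task might look like. Every name is invented for illustration; a real task would live in the conductor manager and use the hash ring to decide ownership:

```python
# Hypothetical sketch of the "untakeover" periodic task: each conductor
# periodically stops local console processes for nodes that have been
# rebalanced away from it, leaving non-local consoles alone.

LOCAL_CONSOLE_DRIVERS = {'shellinabox', 'socat'}

class Node:
    def __init__(self, uuid, console_driver):
        self.uuid = uuid
        self.console_driver = console_driver

def untakeover_consoles(running_consoles, managed_uuids, stop_console):
    """Stop consoles for nodes this conductor no longer manages.

    running_consoles: nodes with a console process on this host.
    managed_uuids: node uuids currently mapped to this conductor.
    stop_console: callable that kills the local console process.
    """
    stopped = []
    for node in running_consoles:
        # Non-local consoles are not tied to this host: leave them alone.
        if node.console_driver not in LOCAL_CONSOLE_DRIVERS:
            continue
        if node.uuid in managed_uuids:
            continue  # still ours, keep the console running
        stop_console(node)
        stopped.append(node.uuid)
    return stopped

nodes = [Node('a', 'shellinabox'), Node('b', 'socat'), Node('c', 'k8s_socat')]
gone = untakeover_consoles(nodes, managed_uuids={'a'}, stop_console=lambda n: None)
print(gone)  # ['b']
```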


Yuriy Zveryanskyy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread Andrea Frittoli
On Fri, Mar 10, 2017 at 1:59 AM Ghanshyam Mann 
wrote:

On Fri, Mar 10, 2017 at 7:23 AM, Lance Bragstad  wrote:



On Thu, Mar 9, 2017 at 3:46 PM, Doug Hellmann  wrote:

Excerpts from Andrea Frittoli's message of 2017-03-09 20:53:54 +:
> Hi folks,
>
> I'm trying to figure out what's the best approach to fade out testing of
> deprecated API versions.
> We currently host in Tempest API tests for Glance API v1, Keystone API v2
> and Cinder API v1.
>
> According to the guidelines for the "follow-standard-deprecation" tag [0],
> when projects that have that tag deprecate a feature:
>
> "Code will be frozen and only receive minimal maintenance (just so that it
> continues to work as-is)."
>
> I interpret this so that projects should maintain some level of testing of
> the deprecated feature, including a deprecated API version.
> The QA team does not see value in testing deprecated API versions in the
> common gate jobs, so my question is what to do with those tests.
>
> One option is to maintain them in Tempest until the API version is
removed,
> and run them in dedicated project jobs.
> This means that tempest would have to run those jobs as well, so three
> extra jobs, until the API version is removed.
>
> The other option is to move those tests out of Tempest, into the projects.
> This would imply back porting them to all relevant branches as well, but
it
> would have the advantage of decoupling them from Tempest. It should be no
> concern from an API stability POV since the code for that API will be
> frozen.
> Tests for deprecated APIs in cinder, keystone and glance are all - as far
> as I can tell - removed or deprecated from interoperability guidelines, so
> moving the tests out of Tempest would not be an issue in that sense.
>
> The 2nd option involves a bit more initial overhead for the removal of
> tests, but I think it would work out for the best in the long term.
>
> There is a 3rd option as well, which is to stop running integration
testing
> on deprecated API versions before they are actually removed, but I feel
> that would not meet the criteria defined by the
follow-standard-deprecation
> tag.
>
> Thoughts?
>
> andrea

Are any of those tests used by the interoperability working group
(formerly DefCore)?


That's a good question. I was very curious about this because last I
checked keystone had v2.0 calls required for defcore. Looks like that might
not be the case anymore [0]? I started a similar thread to this after the
PTG since that was something our group talked about extensively during the
deprecation session [1].

Yes, it seems there is no Volume v1 or Keystone v2 test usage in defcore
2017.01.json [0]. But there are some compute tests which internally use
glance v1 API calls [2]. On the mentioned flagged action: Nova already
moved to the v2 APIs, and the Tempest part is pending; it can be fixed
to make calls on the v2 APIs only (which can be part of this work, and
quickly).


I just checked all non-glance tests that invoke glance v1 in the gate right
now.
None of them is in the 2017.01 guideline [0], and all of them will run with
glance v2 as long as v1 is not marked as enabled.

FlavorsV2NegativeTest:test_boot_with_low_ram
ServerActionsTestJSON:test_create_backup
TestMinimumBasicScenario:test_minimum_basic_scenario
TestVolumeBootPattern:test_create_ebs_image_and_check_boot
VolumesV2ActionsTest:test_volume_upload

[0]
https://refstack.openstack.org/api/v1/guidelines/2017.01/tests?target=platform=required=true=false



From the options about deprecated API testing, I am with option 2, which
really takes the load of test maintenance off Tempest and the gate.

But another question is about stable branch testing of those APIs; for
example, the glance v1 and identity v2 APIs are supported (not
deprecated) in Mitaka. As Tempest is responsible for testing all stable
branch behavior too, should we keep testing them until Mitaka EOL (i.e.
until the APIs are in a deprecated state in all stable branches)?





[0] https://github.com/openstack/defcore/blob/master/2017.01.json
[1]
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113166.html

[2] https://git.openstack.org/cgit/openstack/defcore/tree/2017.01.json#n294



Doug

>
> [0]
>
https://governance.openstack.org/tc/reference/tags/assert_follows-standard-deprecation.html


[openstack-dev] [neutron][astara-neutron]

2017-03-10 Thread Dariusz Śmigiel
Hey,
The astara-neutron repo seems to have been broken for a long time [1].
The last merged patches are dated July 2016 [2]. After that, there is a
queue of changes with Jenkins -1s. It doesn't look to be maintained anymore.

Is there an interest in the Community to fix that?

BR,
dasm

[1] https://review.openstack.org/#/q/project:openstack/astara-neutron
[2] https://review.openstack.org/#/c/340033/



Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread Jeremy Stanley
On 2017-03-10 14:49:34 + (+), Andrea Frittoli wrote:
[...]
> - Glance v1 has been deprecated in Newton, so it's deprecated in all
> supported releases
[...]

Newton released _after_ Mitaka, so it's still supported for a while
to come.
-- 
Jeremy Stanley



[openstack-dev] [openstack-docs] [dev] What's up, doc?

2017-03-10 Thread Alexandra Settle
Team team team team team,

It's been a crazy few weeks since the PTG ramping up our goals for Pike! Big 
thanks to everyone who has really hit the ground running with our new set of 
objectives.

This week I have been helping out Ianeta with the High Availability Guide ToC 
plan (which you can review here: https://review.openstack.org/#/c/440890/ ) 
and Rob Clark with the Security Guide plan 
(https://etherpad.openstack.org/p/sec-guide-pike ). I have also been working 
alongside Brian and the nova team to unblock ourselves from a particularly 
nasty, critical bug affecting the Install Guide. For more information: 
https://bugs.launchpad.net/openstack-manuals/+bug/1663485

Big thanks to Darren Chan who has been doing an awesome job for the last two 
weeks in keeping our bug list under control. We are down to an amazing 108 bugs 
in queue.

Next week, we have Ianeta who will be looking after the bug triage liaison role!

If you're sitting there thinking "bugs are for me, I really love triaging 
bugs!" well, you're in luck! We have a few spots open for the rest of the 
cycle: 
https://wiki.openstack.org/wiki/Documentation/SpecialityTeams#Bug_Triage_Team

== Progress towards Pike ==
https://www.timeanddate.com/countdown/newyear?p0=24=Pike+release=1=slab

Bugs closed in Pike: 32! You guys rock :)

== The Road to the Summit in Boston ==
The summit schedule has been announced: 
https://www.openstack.org/summit/boston-2017/summit-schedule/#day=2017-05-08

I will be representing the docs team at the summit for the new 'Project Update' 
session. For more information: 
https://www.openstack.org/summit/boston-2017/summit-schedule/global-search?t=Project+Update+-+Documentation

Let me know if you had any doc talks accepted!

== Speciality Team Reports ==

API - Anne Gentle: One prominent change is that the Block Storage API v2 is now 
exactly replicated with v3 and so is now marked Deprecated. The list of all 
OpenStack APIs is now also here: https://developer.openstack.org/#api and you 
can see them based on support level here: 
https://developer.openstack.org/api-guide/quick-start/


Configuration Reference and CLI Reference - Tomoyuki Kato: N/A

High Availability Guide - Ianeta Hutchinson: So, we have this patch 
https://review.openstack.org/#/c/440890/13 to establish the new ToC for HA 
Guide. We are trying to move all content that is relevant in the current guide 
over to the new ToC. From there, we will be filing bugs to start collaboration 
with the OSIC DevOps team who have signed up to adopt-a-guide and will be 
helping to provide content as SME's.

Hypervisor Tuning Guide - Blair Bethwaite: The Scientific-WG is considering 
proposing a hypervisor tuning + guide session for the Boston Forum, so 
hopefully, if that goes ahead, it will create some renewed interest.

Installation guides - Lana Brindley: Waiting to sort out the last couple of 
bugs before we branch, hopefully next week.

Networking Guide - John Davidge: Huge number of patches this week on the RFC 
5737 fix [1] - thanks to caoyuan! Continued progress on replacing neutronclient 
commands with OSC versions. Thank you to all contributors. A patch [2] changing 
a documented use of 'tenant' to 'project' sparked a discussion in the neutron 
team about the mixture of terminology in neutron, with the conclusion that all 
remaining references to 'tenant' in our config files and elsewhere are to be 
replaced with 'project' as soon as possible. Some of these will require 
deprecation cycles. Impact on the networking guide should be minimal.
[1] https://launchpad.net/bugs/1656378
[2] https://review.openstack.org/#/c/43816

Operations and Architecture Design guides - Darren Chan: A patch to move the 
draft Arch Guide to docs.o.o has merged. The previous guide has been moved to a 
"to archive" directory until the archiving process is ready. The Arch Guide 
working group met this week to rearchitect and scope the design content. We're 
currently focussed on the storage content.

Security Guide - Nathaniel Dillon: Rob Clark (PTL) and Alex are in the process 
of planning the ins and outs of the overhaul of the guide. We're talking about 
dramatically reducing the scope, and introducing new, maintainable content. 
Rob (hyakuhei) is also looking at getting a sprint in Austin at some point to 
get the security team all together and pump out some content. We also have the 
OSIC Security team working on some of the main, non-wishlist bugs at present.

Training Guides - Matjaz Pancur: N/A

Training labs - Pranav Salunke, Roger Luethi: We have pushed the current state 
of the Ocata training-labs changeset in case others find it useful. The scripts 
follow the install-guide and https://review.openstack.org/#/c/438328/ . The 
resulting cluster cannot launch instances because it fails to find a host (just 
like others following the install-guide have reported).

Admin and User guides - Joseph Robinson: For the User and Admin Guides, I've 
been looping through the 

Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread Andrea Frittoli
On Fri, Mar 10, 2017 at 3:12 PM Lance Bragstad  wrote:

> On Fri, Mar 10, 2017 at 8:49 AM, Andrea Frittoli <
> andrea.fritt...@gmail.com> wrote:
>
>
>
> On Fri, Mar 10, 2017 at 2:24 PM Doug Hellmann 
> wrote:
>
> Excerpts from Ghanshyam Mann's message of 2017-03-10 10:55:25 +0900:
> > [...]
>
> Excellent point.
>
>
> As far as I can tell:
> - Cinder v1 if I'm not mistaken has been deprecated in Juno, so it's
> deprecated in all supported releases.
> - Glance v1 has been deprecated in Newton, so it's deprecated in all
> supported releases
> - Keystone v2 has been deprecated in Mitaka, so testing *must* stay in
> Tempest until Mitaka EOL, which is in a month from now
>
> We should stop testing these three api versions in the common gate
> including stable branches now (except for keystone v2 on stable/mitaka
> which can run for one more month).
>
> Are cinder / glance / keystone willing to take over the API tests and run
> them in their own gate until removal of the API version?
>
>
> 

Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread Lance Bragstad
On Fri, Mar 10, 2017 at 8:49 AM, Andrea Frittoli 
wrote:

>
>
> On Fri, Mar 10, 2017 at 2:24 PM Doug Hellmann 
> wrote:
>
>> [...]
> As far as I can tell:
> - Cinder v1 if I'm not mistaken has been deprecated in Juno, so it's
> deprecated in all supported releases.
> - Glance v1 has been deprecated in Newton, so it's deprecated in all
> supported releases
> - Keystone v2 has been deprecated in Mitaka, so testing *must* stay in
> Tempest until Mitaka EOL, which is in a month from now
>
> We should stop testing these three api versions in the common gate
> including stable branches now (except for keystone v2 on stable/mitaka
> which can run for one more month).
>
> Are cinder / glance / keystone willing to take over the API tests and run
> them in their own gate until 

Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread Andrea Frittoli
On Fri, Mar 10, 2017 at 2:49 PM Andrea Frittoli 
wrote:

> On Fri, Mar 10, 2017 at 2:24 PM Doug Hellmann 
> wrote:
>
> Excerpts from Ghanshyam Mann's message of 2017-03-10 10:55:25 +0900:
> > [...]
>
> Excellent point.
>
>
> As far as I can tell:
> - Cinder v1 if I'm not mistaken has been deprecated in Juno, so it's
> deprecated in all supported releases.
> - Glance v1 has been deprecated in Newton, so it's deprecated in all
> supported releases
>

Of course Glance v1 still has to run on the stable/newton gate jobs, until
Newton EOL (TBD), so tests will stay in Tempest for a cycle more at least.
I guess I shouldn't be sending emails on a Friday afternoon?

The question remains for Cinder and Keystone and in general - how we want
to deal with integration tests for deprecated APIs.

> - Keystone v2 has been deprecated in Mitaka, so testing *must* stay in
> Tempest until Mitaka EOL, which is in a month from now
>
> We should stop testing these three api versions in the common gate
> 

Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread Ihar Hrachyshka
On Fri, Mar 10, 2017 at 6:49 AM, Andrea Frittoli
 wrote:
> - Glance v1 has been deprecated in Newton, so it's deprecated in all
> supported releases

Glance not maintaining Mitaka branch?

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread Andrea Frittoli
Forgot the links:

Glance v1 deprecation:
https://docs.openstack.org/releasenotes/glance/newton.html#deprecation-notes
Keystone v2 deprecation:
https://docs.openstack.org/releasenotes/keystone/mitaka.html#deprecation-notes
Cinder v1 deprecation:
https://wiki.openstack.org/wiki/ReleaseNotes/Juno#OpenStack_Block_Storage_.28Cinder.29


On Fri, Mar 10, 2017 at 2:49 PM Andrea Frittoli 
wrote:

> On Fri, Mar 10, 2017 at 2:24 PM Doug Hellmann 
> wrote:
>
> Excerpts from Ghanshyam Mann's message of 2017-03-10 10:55:25 +0900:
> > On Fri, Mar 10, 2017 at 7:23 AM, Lance Bragstad 
> wrote:
> >
> > >
> > >
> > > On Thu, Mar 9, 2017 at 3:46 PM, Doug Hellmann 
> > > wrote:
> > >
> > >> Excerpts from Andrea Frittoli's message of 2017-03-09 20:53:54 +:
> > >> > Hi folks,
> > >> >
> > >> > I'm trying to figure out what's the best approach to fade out
> testing of
> > >> > deprecated API versions.
> > >> > We currently host in Tempest API tests for Glance API v1, Keystone
> API
> > >> v2
> > >> > and Cinder API v1.
> > >> >
> > >> > According to the guidelines for the "follow-standard-deprecation"
> tag
> > >> [0],
> > >> > when projects that have that tag deprecate a feature:
> > >> >
> > >> > "Code will be frozen and only receive minimal maintenance (just so
> that
> > >> it
> > >> > continues to work as-is)."
> > >> >
> > >> > I interpret this so that projects should maintain some level of
> testing
> > >> of
> > >> > the deprecated feature, including a deprecated API version.
> > >> > The QA team does not see value in testing deprecated API versions
> in the
> > >> > common gate jobs, so my question is what to do with those tests.
> > >> >
> > >> > One option is to maintain them in Tempest until the API version is
> > >> removed,
> > >> > and run them in dedicated project jobs.
> > >> > This means that tempest would have to run those jobs as well, so
> three
> > >> > extra jobs, until the API version is removed.
> > >> >
> > >> > The other option is to move those tests out of Tempest, into the
> > >> projects.
> > >> > This would imply back porting them to all relevant branches as well,
> > >> but it
> > >> > would have the advantage of decoupling them from Tempest. It should
> be
> > >> no
> > >> > concern from an API stability POV since the code for that API will
> be
> > >> > frozen.
> > >> > Tests for deprecated APIs in cinder, keystone and glance are all -
> as
> > >> far
> > >> > as I can tell - removed or deprecated from interoperability
> guidelines,
> > >> so
> > >> > moving the tests out of Tempest would not be an issue in that sense.
> > >> >
> > >> > The 2nd option involves a bit more initial overhead for the removal
> of
> > >> > tests, but I think it would work for the best in the long term.
> > >> >
> > >> > There is a 3rd option as well, which is to stop running integration
> > >> testing
> > >> > on deprecated API versions before they are actually removed, but I
> feel
> > >> > that would not meet the criteria defined by the
> > >> follow-standard-deprecation
> > >> > tag.
> > >> >
> > >> > Thoughts?
> > >> >
> > >> > andrea
> > >>
> > >> Are any of those tests used by the interoperability working group
> > >> (formerly DefCore)?
> > >>
> > >>
> > > That's a good question. I was very curious about this because last I
> > > checked keystone had v2.0 calls required for defcore. Looks like that
> might
> > > not be the case anymore [0]? I started a similar thread to this after
> the
> > > PTG since that was something our group talked about extensively during
> the
> > > deprecation session [1].
> > >
> > Yes, it seems there is no Volume v1 or Keystone v2 test usage in defcore
> > 2017.01.json [0]. But there are some compute tests which internally use
> > Glance v1 API calls [2]. But on the mentioned flagged action, Nova
> > already moved to the v2 APIs, and the Tempest part is pending; it can be
> > fixed to call the v2 APIs only (which can be part of this work, and
> > quick).
> >
> > Of the options for deprecated API testing, I am with option 2, which
> > really takes the load of test maintenance and gating off Tempest.
> >
> > But another question is about stable branch testing of those APIs; for
> > example, Glance v1 and Identity v2 are supported (not deprecated) in
> > Mitaka. As Tempest is responsible for testing all stable branch behavior
> > too, should we keep testing them until Mitaka EOL (until the APIs are in
> > a deprecated state in all stable branches)?
>
> Excellent point.
>
>
> As far as I can tell:
> - Cinder v1, if I'm not mistaken, has been deprecated in Juno, so it's
> deprecated in all supported releases.
> - Glance v1 has been deprecated in Newton, so it's deprecated in all
> supported releases.
> - Keystone v2 has been deprecated in Mitaka, so testing *must* stay in
> Tempest until Mitaka EOL, which is a month from now
>
> We should stop testing these three API versions in the common gate
> 

Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread Andrea Frittoli
On Fri, Mar 10, 2017 at 2:24 PM Doug Hellmann  wrote:

> Excerpts from Ghanshyam Mann's message of 2017-03-10 10:55:25 +0900:
> > On Fri, Mar 10, 2017 at 7:23 AM, Lance Bragstad 
> wrote:
> >
> > >
> > >
> > > On Thu, Mar 9, 2017 at 3:46 PM, Doug Hellmann 
> > > wrote:
> > >
> > >> Excerpts from Andrea Frittoli's message of 2017-03-09 20:53:54 +:
> > >> > Hi folks,
> > >> >
> > >> > I'm trying to figure out what's the best approach to fade out
> testing of
> > >> > deprecated API versions.
> > >> > We currently host in Tempest API tests for Glance API v1, Keystone
> API
> > >> v2
> > >> > and Cinder API v1.
> > >> >
> > >> > According to the guidelines for the "follow-standard-deprecation"
> tag
> > >> [0],
> > >> > when projects that have that tag deprecate a feature:
> > >> >
> > >> > "Code will be frozen and only receive minimal maintenance (just so
> that
> > >> it
> > >> > continues to work as-is)."
> > >> >
> > >> > I interpret this so that projects should maintain some level of
> testing
> > >> of
> > >> > the deprecated feature, including a deprecated API version.
> > >> > The QA team does not see value in testing deprecated API versions
> in the
> > >> > common gate jobs, so my question is what to do with those tests.
> > >> >
> > >> > One option is to maintain them in Tempest until the API version is
> > >> removed,
> > >> > and run them in dedicated project jobs.
> > >> > This means that tempest would have to run those jobs as well, so
> three
> > >> > extra jobs, until the API version is removed.
> > >> >
> > >> > The other option is to move those tests out of Tempest, into the
> > >> projects.
> > >> > This would imply back porting them to all relevant branches as well,
> > >> but it
> > >> > would have the advantage of decoupling them from Tempest. It should
> be
> > >> no
> > >> > concern from an API stability POV since the code for that API will
> be
> > >> > frozen.
> > >> > Tests for deprecated APIs in cinder, keystone and glance are all -
> as
> > >> far
> > >> > as I can tell - removed or deprecated from interoperability
> guidelines,
> > >> so
> > >> > moving the tests out of Tempest would not be an issue in that sense.
> > >> >
> > >> > The 2nd option involves a bit more initial overhead for the removal
> of
> > >> > tests, but I think it would work for the best in the long term.
> > >> >
> > >> > There is a 3rd option as well, which is to stop running integration
> > >> testing
> > >> > on deprecated API versions before they are actually removed, but I
> feel
> > >> > that would not meet the criteria defined by the
> > >> follow-standard-deprecation
> > >> > tag.
> > >> >
> > >> > Thoughts?
> > >> >
> > >> > andrea
> > >>
> > >> Are any of those tests used by the interoperability working group
> > >> (formerly DefCore)?
> > >>
> > >>
> > > That's a good question. I was very curious about this because last I
> > > checked keystone had v2.0 calls required for defcore. Looks like that
> might
> > > not be the case anymore [0]? I started a similar thread to this after
> the
> > > PTG since that was something our group talked about extensively during
> the
> > > deprecation session [1].
> > >
> > Yes, it seems there is no Volume v1 or Keystone v2 test usage in defcore
> > 2017.01.json [0]. But there are some compute tests which internally use
> > Glance v1 API calls [2]. But on the mentioned flagged action, Nova
> > already moved to the v2 APIs, and the Tempest part is pending; it can be
> > fixed to call the v2 APIs only (which can be part of this work, and
> > quick).
> >
> > Of the options for deprecated API testing, I am with option 2, which
> > really takes the load of test maintenance and gating off Tempest.
> >
> > But another question is about stable branch testing of those APIs; for
> > example, Glance v1 and Identity v2 are supported (not deprecated) in
> > Mitaka. As Tempest is responsible for testing all stable branch behavior
> > too, should we keep testing them until Mitaka EOL (until the APIs are in
> > a deprecated state in all stable branches)?
>
> Excellent point.
>
>
As far as I can tell:
- Cinder v1, if I'm not mistaken, has been deprecated in Juno, so it's
deprecated in all supported releases.
- Glance v1 has been deprecated in Newton, so it's deprecated in all
supported releases.
- Keystone v2 has been deprecated in Mitaka, so testing *must* stay in
Tempest until Mitaka EOL, which is a month from now.

We should stop testing these three API versions in the common gate,
including stable branches, now (except for Keystone v2 on stable/mitaka,
which can run for one more month).

Are cinder / glance / keystone willing to take over the API tests and run
them in their own gate until removal of the API version?

Doug
>
> >
> > >
> > > [0] https://github.com/openstack/defcore/blob/master/2017.01.json
> > > [1] http://lists.openstack.org/pipermail/openstack-dev/
> > > 2017-March/113166.html
> > >
> > > [2]
> > 
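Ghanshyam's stable-branch question above has a mechanical core: an API
version still needs testing on every supported branch where it has not yet
been deprecated. That can be sketched as a small helper; the release ordering
and the set of supported branches below are assumptions for illustration:

```python
# Chronological release order, as assumed from the thread.
RELEASES = ["juno", "kilo", "liberty", "mitaka", "newton", "ocata"]

def branches_still_needing_tests(deprecated_in, supported_branches):
    """Branches where the API version is still supported (not yet
    deprecated), so some gate still has to exercise it."""
    cutoff = RELEASES.index(deprecated_in)
    return [b for b in supported_branches if RELEASES.index(b) < cutoff]

# Supported stable branches at the time of the thread (assumed).
supported = ["mitaka", "newton", "ocata"]

print(branches_still_needing_tests("juno", supported))    # Cinder v1 -> []
print(branches_still_needing_tests("newton", supported))  # Glance v1 -> ['mitaka']
print(branches_still_needing_tests("mitaka", supported))  # Keystone v2 -> []
```

This captures only the "still supported somewhere" criterion from the quoted
question; the thread also discusses keeping tests until the deprecated code
is actually removed, which is a longer window.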

Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-10 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-03-10 09:28:52 -0500:
> Excerpts from Ian Y. Choi's message of 2017-03-10 01:22:40 +0900:
> > Doug Hellmann wrote on 3/9/2017 9:24 PM:
> > > Excerpts from Sean McGinnis's message of 2017-03-07 07:17:09 -0600:
> > >> On Mon, Mar 06, 2017 at 09:06:18AM -0500, Sean Dague wrote:
> > >>> On 03/06/2017 08:43 AM, Andreas Jaeger wrote:
> >  On 2017-03-06 14:03, Sean Dague  wrote:
> > > I'm trying to understand the implications of
> > > https://review.openstack.org/#/c/439500. And the comment in the linked
> > > email:
> > >
> > > ">> Yes, we decided some time ago to not translate the log files 
> > > anymore and
> > >>> thus our tools do not handle them anymore - and in general, we 
> > >>> remove
> > >>> these kind of files."
> > > Does that mean that all the _LE, _LI, _LW stuff in projects should be
> > > fully removed? Nova currently enforces those things are there -
> > > https://github.com/openstack/nova/blob/e88dd0034b1b135d680dae3494597e295add9cfe/nova/hacking/checks.py#L314-L333
> > > and want to make sure our tools aren't making us do work that the i18n
> > > team is ignoring and throwing away.
> > >> So... just looking for a definitive statement on this since there has
> > >> been some back and forth discussion.
> > >>
> > >> Is it correct to say - all projects may (should?) now remove all bits in
> > >> place for using and enforcing the _Lx() translation markers. Only _()
> > >> should be used for user visible error messages.
> > >>
> > >> Sean (smcginnis)
> > >>
> > > The situation is still not quite clear to me, and it would be
> > > unfortunate to undo too much of the translation support work because
> > > it will be hard to redo it.
> > >
> > > Is there documentation somewhere describing what the i18n team has
> > > committed to trying to translate?
> > 
> > The I18n team describes its translation plan and priorities in Zanata,
> > the translation platform: https://translate.openstack.org/ .
> > 
> > >   I think I heard that there was a
> > > shift in emphasis to "user interfaces", but I'm not sure if that
> > > includes error messages in services. Should we remove all use of
> > > oslo.i18n from services? Or only when dealing with logs?
> > 
> > When the I18n team decided on the removal of log translations in
> > Barcelona last October, there had been no discussion of removing
> > oslo.i18n translation support for log messages.
> > (I have kept track of what the I18n team discussed during the Barcelona
> > I18n meetup on Etherpad - [1])
> > 
> > Now I think that the final decision on oslo.i18n log translation support
> > needs more involvement from translators, and also more people
> > community-wide, including project working groups, the user committee,
> > and operators, as Matt suggested.
> > 
> > If translating log messages is meaningful to some community members and
> > some translators show interest in translating them, then the I18n team
> > can revert the policy and roll back the translations.
> > The translated strings are still alive not only in previous stable
> > branches, but also in the translation memory in Zanata, the translation
> > platform.
> > 
> > I would like to find some ways to discuss this topic more widely with
> > the community.
> 
> I would suggest that we discuss this at the Forum in Boston, but I think
> we need to gather some input before then because if there is a consensus
> that log translations are not useful we can allow the code cleanup to
> occur and not take up face-to-face time.

I've started a thread on the operators mailing list [1].

Doug

[1] 
http://lists.openstack.org/pipermail/openstack-operators/2017-March/012887.html
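The marker change being discussed can be sketched in a few lines: drop the
`_LE`/`_LI`/`_LW` wrappers on log calls, keeping `_()` only for user-facing
text such as exception messages. A minimal illustration, with
`gettext.NullTranslations` standing in for a project's oslo.i18n setup (the
function and message below are made up for the example):

```python
import gettext
import logging

# Stand-in for oslo.i18n's _(); real services import this from their
# project's i18n module.
_ = gettext.NullTranslations().gettext

LOG = logging.getLogger(__name__)

def attach_volume(volume_id, available=True):
    if not available:
        # Before the change this line would have read:
        #   LOG.error(_LE("Failed to attach volume %s"), volume_id)
        LOG.error("Failed to attach volume %s", volume_id)  # plain, no marker
        # User-facing error text keeps the _() translation marker.
        raise RuntimeError(_("Volume %s could not be attached") % volume_id)
    return volume_id

print(attach_volume("vol-1"))  # -> vol-1
```

Whether `_()` should survive in services at all is exactly the open question
in this thread; the sketch only shows the split being proposed.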



Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-10 Thread Doug Hellmann
Excerpts from Ian Y. Choi's message of 2017-03-10 01:22:40 +0900:
> Doug Hellmann wrote on 3/9/2017 9:24 PM:
> > Excerpts from Sean McGinnis's message of 2017-03-07 07:17:09 -0600:
> >> On Mon, Mar 06, 2017 at 09:06:18AM -0500, Sean Dague wrote:
> >>> On 03/06/2017 08:43 AM, Andreas Jaeger wrote:
>  On 2017-03-06 14:03, Sean Dague  wrote:
> > I'm trying to understand the implications of
> > https://review.openstack.org/#/c/439500. And the comment in the linked
> > email:
> >
> > ">> Yes, we decided some time ago to not translate the log files 
> > anymore and
> >>> thus our tools do not handle them anymore - and in general, we remove
> >>> these kind of files."
> > Does that mean that all the _LE, _LI, _LW stuff in projects should be
> > fully removed? Nova currently enforces those things are there -
> > https://github.com/openstack/nova/blob/e88dd0034b1b135d680dae3494597e295add9cfe/nova/hacking/checks.py#L314-L333
> > and want to make sure our tools aren't making us do work that the i18n
> > team is ignoring and throwing away.
> >> So... just looking for a definitive statement on this since there has
> >> been some back and forth discussion.
> >>
> >> Is it correct to say - all projects may (should?) now remove all bits in
> >> place for using and enforcing the _Lx() translation markers. Only _()
> >> should be used for user visible error messages.
> >>
> >> Sean (smcginnis)
> >>
> > The situation is still not quite clear to me, and it would be
> > unfortunate to undo too much of the translation support work because
> > it will be hard to redo it.
> >
> > Is there documentation somewhere describing what the i18n team has
> > committed to trying to translate?
> 
> The I18n team describes its translation plan and priorities in Zanata,
> the translation platform: https://translate.openstack.org/ .
> 
> >   I think I heard that there was a
> > shift in emphasis to "user interfaces", but I'm not sure if that
> > includes error messages in services. Should we remove all use of
> > oslo.i18n from services? Or only when dealing with logs?
> 
> When the I18n team decided on the removal of log translations in Barcelona
> last October, there had been no discussion of removing oslo.i18n
> translation support for log messages.
> (I have kept track of what the I18n team discussed during the Barcelona
> I18n meetup on Etherpad - [1])
> 
> Now I think that the final decision on oslo.i18n log translation support
> needs more involvement from translators, and also more people
> community-wide, including project working groups, the user committee, and
> operators, as Matt suggested.
> 
> If translating log messages is meaningful to some community members and
> some translators show interest in translating them, then the I18n team can
> revert the policy and roll back the translations.
> The translated strings are still alive not only in previous stable
> branches, but also in the translation memory in Zanata, the translation
> platform.
> 
> I would like to find some ways to discuss this topic more widely with the
> community.

I would suggest that we discuss this at the Forum in Boston, but I think
we need to gather some input before then because if there is a consensus
that log translations are not useful we can allow the code cleanup to
occur and not take up face-to-face time.

Doug

> 
> 
> With many thanks,
> 
> /Ian
> 
> [1] https://etherpad.openstack.org/p/barcelona-i18n-meetup
> 
> >
> > Doug
> >
> 

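Nova's hacking checks linked above currently enforce that log calls carry
the markers; adopting the new policy would mean a check in the opposite
direction. A hypothetical flake8/hacking-style check that flags remaining
`_LE`/`_LI`/`_LW` calls - the check number `N999` and the exact regex are
illustrative, not Nova's:

```python
import re

# Matches a call to one of the log translation markers.
LOG_MARKER_RE = re.compile(r'\b_L[EIW]\(')

def check_no_log_translation_markers(logical_line):
    """N999 - log messages should no longer be translated."""
    match = LOG_MARKER_RE.search(logical_line)
    if match:
        yield match.start(), ("N999: do not use _LE/_LI/_LW; "
                              "log messages are not translated")

print(list(check_no_log_translation_markers('LOG.info(_LI("Booting %s"), uuid)')))
print(list(check_no_log_translation_markers('LOG.info("Booting %s", uuid)')))  # -> []
```

Registered as a local hacking check, this would fail the gate on any
reintroduction of the markers once they are removed.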


Re: [openstack-dev] [api][qa][tc][glance][keystone][cinder] Testing of deprecated API versions

2017-03-10 Thread Doug Hellmann
Excerpts from Ghanshyam Mann's message of 2017-03-10 10:55:25 +0900:
> On Fri, Mar 10, 2017 at 7:23 AM, Lance Bragstad  wrote:
> 
> >
> >
> > On Thu, Mar 9, 2017 at 3:46 PM, Doug Hellmann 
> > wrote:
> >
> >> Excerpts from Andrea Frittoli's message of 2017-03-09 20:53:54 +:
> >> > Hi folks,
> >> >
> >> > I'm trying to figure out what's the best approach to fade out testing of
> >> > deprecated API versions.
> >> > We currently host in Tempest API tests for Glance API v1, Keystone API
> >> v2
> >> > and Cinder API v1.
> >> >
> >> > According to the guidelines for the "follow-standard-deprecation" tag
> >> [0],
> >> > when projects that have that tag deprecate a feature:
> >> >
> >> > "Code will be frozen and only receive minimal maintenance (just so that
> >> it
> >> > continues to work as-is)."
> >> >
> >> > I interpret this so that projects should maintain some level of testing
> >> of
> >> > the deprecated feature, including a deprecated API version.
> >> > The QA team does not see value in testing deprecated API versions in the
> >> > common gate jobs, so my question is what to do with those tests.
> >> >
> >> > One option is to maintain them in Tempest until the API version is
> >> removed,
> >> > and run them in dedicated project jobs.
> >> > This means that tempest would have to run those jobs as well, so three
> >> > extra jobs, until the API version is removed.
> >> >
> >> > The other option is to move those tests out of Tempest, into the
> >> projects.
> >> > This would imply back porting them to all relevant branches as well,
> >> but it
> >> > would have the advantage of decoupling them from Tempest. It should be
> >> no
> >> > concern from an API stability POV since the code for that API will be
> >> > frozen.
> >> > Tests for deprecated APIs in cinder, keystone and glance are all - as
> >> far
> >> > as I can tell - removed or deprecated from interoperability guidelines,
> >> so
> >> > moving the tests out of Tempest would not be an issue in that sense.
> >> >
> >> > The 2nd option involves a bit more initial overhead for the removal of
> >> > tests, but I think it would work for the best in the long term.
> >> >
> >> > There is a 3rd option as well, which is to stop running integration
> >> testing
> >> > on deprecated API versions before they are actually removed, but I feel
> >> > that would not meet the criteria defined by the
> >> follow-standard-deprecation
> >> > tag.
> >> >
> >> > Thoughts?
> >> >
> >> > andrea
> >>
> >> Are any of those tests used by the interoperability working group
> >> (formerly DefCore)?
> >>
> >>
> > That's a good question. I was very curious about this because last I
> > checked keystone had v2.0 calls required for defcore. Looks like that might
> > not be the case anymore [0]? I started a similar thread to this after the
> > PTG since that was something our group talked about extensively during the
> > deprecation session [1].
> >
> Yes, it seems there is no Volume v1 or Keystone v2 test usage in defcore
> 2017.01.json [0]. But there are some compute tests which internally use
> Glance v1 API calls [2]. But on the mentioned flagged action, Nova already
> moved to the v2 APIs, and the Tempest part is pending; it can be fixed to
> call the v2 APIs only (which can be part of this work, and quick).
> 
> Of the options for deprecated API testing, I am with option 2, which
> really takes the load of test maintenance and gating off Tempest.
> 
> But another question is about stable branch testing of those APIs; for
> example, Glance v1 and Identity v2 are supported (not deprecated) in
> Mitaka. As Tempest is responsible for testing all stable branch behavior
> too, should we keep testing them until Mitaka EOL (until the APIs are in a
> deprecated state in all stable branches)?

Excellent point.

Doug

> 
> >
> > [0] https://github.com/openstack/defcore/blob/master/2017.01.json
> > [1] http://lists.openstack.org/pipermail/openstack-dev/
> > 2017-March/113166.html
> >
> > [2]
> https://git.openstack.org/cgit/openstack/defcore/tree/2017.01.json#n294
> 
> >
> >
> >> Doug
> >>
> >> >
> >> > [0]
> >> > https://governance.openstack.org/tc/reference/tags/assert_fo
> >> llows-standard-deprecation.html
> >>
> >> 
> >>
> >
> >
> >
> >


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread Davanum Srinivas
On Fri, Mar 10, 2017 at 12:53 AM, Joshua Harlow  wrote:
> Renat Akhmerov wrote:
>>
>>
>>> On 10 Mar 2017, at 06:02, Zane Bitter >> > wrote:
>>>
>>> On 08/03/17 11:23, David Moreau Simard wrote:

 The App Catalog, to me, sounds sort of like a weird message that
 OpenStack somehow requires applications to be
 packaged/installed/deployed differently.
 If anything, perhaps we should spend more effort on advertising that
 OpenStack provides bare metal or virtual compute resources and that
 apps will work just like anywhere else.
>>>
>>>
>>> Look, it's true that legacy apps from the 90s will run on any VM you
>>> can give them. But the rest of the world has spent the last 15 years
>>> moving on from that. Applications of the future, and increasingly the
>>> present, span multiple VMs/containers, make use of services provided
>>> by the cloud, and interact with their own infrastructure. And users
>>> absolutely will need ways of packaging and deploying them that work
>>> with the underlying infrastructure. Even those apps from the 90s
>>> should be taking advantage of things like e.g. Neutron security
>>> groups, configuration of which is and will always be out of scope for
>>> Docker Hub images.
>>>
>>> So no, we should NOT spend more effort on advertising that we aim to
>>> become to cloud what Subversion is to version control. We've done far
>>> too much of that already IMHO.
>>
>>
>> 100% agree with that.
>>
>> And this whole discussion is taking me to the question: is there really
>> any officially accepted strategy for OpenStack for 1, 3, 5 years?
>
>
> I can propose what I would like for a strategy (it's not more VMs and more
> neutron security groups...), though if it involves (more) design by
> committee, count me out.

Josh,
I'd like to see what you think should be done.

Thanks,
Dims

>
> I honestly believe we have to do the equivalent of a technology leapfrog if
> we actually want to be relevant; but maybe I'm too eager...
>
> Is
>>
>> there any ultimate community goal we’re moving to regardless of
>> underlying technologies (containers, virtualization etc.)? I know we’re
>> now considering various community goals like transition to Python 3.5
>> etc. but these goals don’t tell anything about our future as an IT
>> ecosystem from user perspective. I may assume that I’m just not aware of
>> it. I’d be glad if it was true. I’m eager to know the answers for these
>> questions. Overall, to me it feels like every company in the community
>> just tries to pursue its own short-term (in the best case mid-term)
>> goals without really caring about long-term common goals. So if we say
>> OpenStack is a car then it seems like the wheels of this car are moving
>> in different directions. Again, I’d be glad if it wasn’t true. So maybe
>> some governance is needed around setting and achieving the ultimate goals of
>> OpenStack? Or if they already exist we need to better explain them and
>> advertise publicly? That in turn IMO could attract more businesses and
>> contributors.
>>
>> Renat Akhmerov
>> @Nokia
>>
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-03-10 Thread Emilien Macchi
Results of the survey:
https://docs.google.com/forms/d/1Vx9qMRhYxYZE5qYKmU6y7Czr4klBLlRS7ARQFLw4UEo/edit#responses
71% voted to accept the new logo as the new mascot, once we have the
Foundation's feedback on Carlos's proposal. If some iterations are needed,
we'll work with them to adjust, but accept that a new logo will replace the
original one.
29% voted to decline any new logo and keep the original one.

Heidi, do we have any update on what Carlos sent before the PTG? Is
there anything we can do to move forward?


Thanks everyone,

On Fri, Feb 17, 2017 at 9:21 AM,   wrote:
> There is no project that can stand on its own.
> Even Swift needs some identity management.
>
> Thus, even if you are contributing to only one project, you are still
> dependent on many others, including QA and infrastructure and so on.
>
> While most customers are looking at a few projects together and not all
> projects combined, it is still referred to as OpenStack. The release is of
> OpenStack.
> There are a lot of features that span many projects, and just because a
> feature is done in one project does not mean it is sufficient for customer
> needs. HA, upgrade, and log consistency are all examples of this.
>
> The strength of OpenStack is in the combination of projects working together.
>
> I will skip the topic of what is core and what is not.
> I personally think that we did customers and ourselves a big disservice
> when we abandoned the integrated release concept, for the same reasons I
> stated above.
> Thanks,
> Arkady
>
> -Original Message-
> From: Dmitry Tantsur [mailto:dtant...@redhat.com]
> Sent: Friday, February 17, 2017 6:31 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help 
> your team?
>
> On 02/17/2017 12:01 AM, Chris Dent wrote:
>> On Thu, 16 Feb 2017, Dan Prince wrote:
>>
>>> And yes. We are all OpenStack developers in a sense. We want to align
>>> things in the technical arena. But I think you'll also find that most
>>> people more closely associate themselves to a team within OpenStack
>>> than they perhaps do with the larger project. Many of us in TripleO
>>> feel that way I think. This is a healthy thing, being part of a team.
>>> Don't make us feel bad because of it by suggesting that uber
>>> OpenStack graphics styling takes precedence.
>>
>> I'd very much like to have a more clear picture of the number of
>> people who think of themselves primarily as "OpenStack developers"
>> or primarily as "$PROJECT developers".
>>
>> I've always assumed that most people in the community(tm) thought of
>> themselves as the former but I'm realizing (in part because of what
>> Dan's said here) that's bias or solipsism on my part and I really have
>> no clue what the situation is.
>>
>> Anyone have a clue?
>
> I don't have a clue, and I don't personally think it matters. But I suspect 
> the latter is the majority. At least because very few contributors have a 
> chance to contribute to something OpenStack-wide, while many people get 
> assigned to work on a project or a few of them.
>
> That being said, I don't believe that the "OpenStack vs $PROJECT" question is 
> as important as it may seem from this thread :)
>
>>
>>
>>
>>
>
>
>



-- 
Emilien Macchi



Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-10 Thread gordon chung


On 10/03/17 12:53 AM, Joshua Harlow wrote:
>
> I can propose what I would like for a strategy (it's not more VMs and
> more neutron security groups...), though if it involves (more) design by
> committee, count me out.
>
> I honestly believe we have to do the equivalent of a technology leapfrog
> if we actually want to be relevant; but maybe I'm too eager...

seems like a manifesto waiting to be penned. probably best in a separate 
thread... not sure with what tag if any. regardless, i think it'd be 
good to see what others' long-term vision is... maybe it'll help others 
consider what their own expectations are.

cheers,

-- 
gord


[openstack-dev] [release] Release countdown for week R-24, March 13-17

2017-03-10 Thread Thierry Carrez
Welcome to the first release countdown email for the Pike cycle !

Development Focus
-----------------

Teams should be working on specs approval and implementation for
priority features for this cycle.

General Information
-------------------

In case you haven't looked at it yet, you will find the Pike release
schedule at:

https://releases.openstack.org/pike/schedule.html

For things released on the cycle-with-intermediary release model (or the
independent model), a quick reminder that we generally prefer releases of
libraries and other things with dependencies early in the week, so that
we have plenty of time to fix them in case of breakage. For the same
reasons, we may also postpone processing a release request made on a
Friday until the next week.

Stable release requests will be reviewed by the stable team in addition
to the release team, so there may be additional delays, as that team is
short-staffed.

Actions
-------

Release liaisons should add their name and contact information to:
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management

New liaisons should familiarize themselves with the release
instructions in the README file in openstack/releases. Follow up
here on the mailing list if you have questions about the expectations
for liaisons for the first milestone.

Some projects have also added their own project-specific deadlines to
the general Pike schedule. If you have something unique, please feel
free to update it by proposing a patch to the openstack/releases
repository (doc/source/pike/schedule.*). There is no need to add a
project-specific deadline that is the same as the global deadline.

Project teams that want to change their release model should do so
before the first milestone, coming up next month on R-20.

Finally, teams should start brainstorming what topics they want to see
covered in cross-community discussions at the Forum at the OpenStack
Summit in Boston in May.

You can find more information about the process in Emilien's recent post
to the ML:
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113115.html

...and links to brainstorming etherpads (or create your own) at:
https://wiki.openstack.org/wiki/Forum/Boston2017

Upcoming Deadlines & Dates
--------------------------

Pike-1 milestone: April 13 (R-20 week)
Forum at OpenStack Summit in Boston: May 8-11

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron][networking-l2gw] ocata release?

2017-03-10 Thread Alfredo Moralejo Alonso
Hi,

Will you push a new tag release for ocata also?

Best regards,

Alfredo


On Wed, Feb 8, 2017 at 8:55 PM, Sukhdev Kapur  wrote:
> Gary,
>
> Thank you - you da man...
>
> -Sukhdev
>
>
> On Sun, Feb 5, 2017 at 11:27 PM, Gary Kotton  wrote:
>>
>> Done -
>> https://review.openstack.org/gitweb?p=openstack/networking-l2gw.git;a=shortlog;h=refs/heads/stable/ocata
>> A luta continua
>>
>> On 2/6/17, 8:51 AM, "Gary Kotton"  wrote:
>>
>> Hi,
>> Yes, I will go and do this.
>> Thanks
>> Gary
>>
>> On 2/6/17, 4:05 AM, "Takashi Yamamoto"  wrote:
>>
>> hi,
>>
>> is anyone going to cut ocata release/branch for networking-l2gw?
>>
>>
>>
>>
>>
>>
>
>
>
>



[openstack-dev] [API-WG] Schema like aws-sdk and API capabilities discovery

2017-03-10 Thread Gilles Dubreuil

Hi,

On a different list we're talking about improvements/new features on the 
client side of the OpenStack APIs, and this topic came up (please see below).


Although the API-ref is doing a great job, I believe the question is: can we 
achieve an equivalent of the aws-sdk? For instance, could we have each 
project's API publish its own schema?


I suppose this would fit under the API-WG umbrella. Would that be 
correct? Could someone in the group provide feedback please?


Trying to find current work around this, I can see the API capabilities 
discovery effort [1] & [2] is great, but it seems to me that part of the 
challenge for the WG is a lack of schemas too. Wouldn't it make sense to 
have a standardized way for all service APIs (at least on the public 
side) to publish their schema (including version/microversion details) 
before going any further?
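As a concrete (and entirely hypothetical) illustration of what per-project schema publication could buy us on the client side, here is a minimal sketch in the aws-sdk spirit. The schema layout, operation names, and microversion fields below are assumptions made up for this example; no OpenStack service publishes such a document today.

```python
# Hypothetical sketch only: the schema layout, operation names, and
# microversion fields below are assumptions for illustration, not an
# artifact any OpenStack service publishes today.

EXAMPLE_SCHEMA = {
    "service": "compute",
    "version": "2.1",
    "microversions": {"min": "2.1", "max": "2.42"},
    "operations": {
        "list_servers": {"method": "GET", "path": "/servers"},
        "get_server": {"method": "GET", "path": "/servers/{server_id}"},
    },
}


def build_client(schema):
    """Generate a tiny callable client from a schema document."""
    def call(operation, **params):
        op = schema["operations"][operation]
        # A real client would issue the HTTP request here; we only
        # resolve the method and path to show the mechanism.
        return op["method"], op["path"].format(**params)
    return call


client = build_client(EXAMPLE_SCHEMA)
print(client("get_server", server_id="abc123"))  # -> ('GET', '/servers/abc123')
```

The point is that once the schema (with its version/microversion bounds) is a published artifact, the client body becomes almost entirely generated, which is how the aws-sdk stays in sync with its services.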


Thanks,
Gilles

[1] https://etherpad.openstack.org/p/ptg-architecture-workgroup
[2] https://review.openstack.org/#/c/386555/




 Forwarded Message 
Subject:Re: Misty 0.2.0
Date:   Fri, 10 Mar 2017 09:19:44 +0100
From:   Ladislav Smola 
To: Gilles Dubreuil 
CC: 	Marek Aufart , Tzu-Mainn Chen 
, Petr Blaho 




Hola,

It would be nice to see this moving in the direction of the aws-sdk, where 
it just holds the API schema (which you would get from OpenStack itself) and 
the whole client is generated from it. But I assume this needs the 
API-ref to be in good shape.


Ladislav




Re: [openstack-dev] [ironic] [neutron] Should the ironic-neutron meeting start back up for pike?

2017-03-10 Thread Vasyl Saienko
Michael, thanks for raising this question.

I don't have a strong opinion here. We haven't spent much time on
network-related things during the past several meetings. But now we have a
'networking-baremetal' repo under our control.
We will need time to talk about the design of our Neutron plugin, its
implementation, etc. So my feeling is that in 1-2 months we will definitely
need more time to talk about networking in Ironic than we do today.

On Thu, Mar 9, 2017 at 8:49 PM, Dmitry Tantsur  wrote:

> On 03/07/2017 08:44 PM, Michael Turek wrote:
>
>> Hey all,
>>
>> So at yesterday's ironic IRC meeting, the question of whether or not the
>> ironic neutron integration meeting should start back up came up. My
>> understanding is that this meeting died down as it became more
>> status-oriented.
>>
>> I'm wondering if it'd be worthwhile to kick it off again, as 4 of Pike's
>> high-priority items are neutron integration focused.
>>
>> Personally it'd be a meeting I'd attend this cycle but I could understand
>> if
>> it's more trouble than it's worth.
>>
>
> I feel quite the same. I'd find it useful for me to learn from more
> knowledgeable folks, but I'd like to hear their opinion ;)
>
>
>
>> Thoughts?
>>
>> Thanks,
>> Mike
>>
>>
>> 
>>
>
>
>


Re: [openstack-dev] [nova] Core team changes

2017-03-10 Thread Daniel P. Berrange
On Thu, Mar 09, 2017 at 05:14:08PM -0600, Matt Riedemann wrote:
> I wanted to let everyone know of some core team changes within Nova.
> 
> nova-core
> -
> 
> I've discussed it with and removed Daniel Berrange (danpb) and Michael Still
> (mikal) from the nova-core team. Both are busy working on other projects and
> have been for a while now, and I wanted to have the list reflect that
> reality. I'm sure both would have a short on-ramp to get back in should the
> situation change.
> 
> nova-specs-core
> ---
> 
> I've also removed Dan and Michael from nova-specs-core for the same reasons.
> 
> I've added Jay Pipes (jaypipes) and Sylvain Bauza (bauzas) to the
> nova-specs-core team. This was probably a long time coming. Both are very
> influential in the project and the direction and priorities from release to
> release.
> 
> nova-stable-maint
> -
> 
> During the PTG I added Sylvain to the nova-stable-maint core team. Sylvain
> knows the rules about the stable branch support phases and has a keen eye
> for what's appropriate and what's not for a backport.
> 
> --
> 
> Thank you to Daniel and Michael for everything they've done for Nova over
> the years, and I wish them the best in their current work.  And thank you to
> Jay and Sylvain for the continuing work that you're doing to keep moving
> Nova forward.

FYI, I am also going to remove myself from os-vif core for the same reasons.
There are still seven other os-vif core members who are doing a fine job
at dealing with ongoing work there.

Regards,
Daniel
-- 
|: http://berrange.com          -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org           -o- http://virt-manager.org                 :|
|: http://entangle-photo.org    -o- http://search.cpan.org/~danberr/        :|



Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-10 Thread Renat Akhmerov

> On 10 Mar 2017, at 15:09, Dougal Matthews  wrote:
> 
> On 10 March 2017 at 04:22, Renat Akhmerov  > wrote:
> Hi,
> 
> I probably like the base class approach better too.
> 
> However, I’m trying to understand if we need this variety of classes.
> Do we need a separate class for asynchronous actions? IMO, since is_sync() is 
> just an instance method that can potentially return both True and False based 
> on the instance state, it shouldn't be introduced by a special class. Otherwise 
> it's confusing that a class declared as AsyncAction can actually be 
> synchronous (if its is_sync() returns True). So maybe we should just leave 
> this method in the base class.
> I'm also wondering if we should just always pass "context" into the run() 
> method. Those action implementations that don't need it could just ignore it. 
> Not sure though.
> This is a good point. I had originally thought it would be backwards 
> incompatible to make this change - however, users will need to update their 
> actions to inherit from mistral-lib so they will need to opt in. Then in 
> mistral we can do something like...
> 
> if isinstance(action, mistral_lib.Action):
>     action.run(ctx)
> else:
>     # deprecation warning about action now inheriting from mistral_lib
>     # and taking a context etc.
>     action.run()

Yes, right.

> As far as the mixin approach, I'd say I'd be ok with having mixins for 
> context-based actions. Although, like Dougal said, it may be a little harder 
> to read, this approach gives huge flexibility for the long term. Imagine if we 
> want to have a class of actions that needs some different kind of information. 
> Just making it up… For example, some of my actions need to be aware of some 
> policies (Congress-like) or information about metrics of the current 
> operating system (this is probably a bad example because it's easy to use 
> standard Python modules but I'm just trying to illustrate the idea). In this 
> case we could have PolicyMixin and OperatingSystemMixin that would set 
> required info into the instance state or provide handle interfaces for 
> more advanced uses.
> 
> I like the idea of mixins if we can see a future with many small components 
> that can be included in an action class. However, like you I didn't manage to 
> think of any real examples.
> 
> It should be possible to migrate to a mixin approach later if we have the 
> need.


Well, I didn’t manage to find real use cases probably because I don’t develop 
lots of actions :) Although the example with policies seems almost real to me. 
This is something that was raised several times during design sessions in the 
past. Anyway, I agree with you that seems like we can add mixins later if we 
want to. I don’t see any reasons now why not.


Renat Akhmerov
@Nokia




Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are weready?

2017-03-10 Thread Syed Armani
Folks,

I am going to ask the question raised by Zane one more time:

Is there a migration plan for Heat users who have existing stacks
containing the v1 resources?

Cheers,
Syed

On Thu, Aug 25, 2016 at 7:10 PM, Assaf Muller  wrote:

> On Thu, Aug 25, 2016 at 7:35 AM, Gary Kotton  wrote:
> > Hi,
> > At the moment it is still not clear to me the upgrade process from V1 to
> V2. The migration script https://review.openstack.org/#/c/289595/ has yet
> to be approved. Does this support all drivers or is this just the default
> reference implementation driver?
>
> The migration script doesn't have a test, so we really have no idea if
> it's going to work.
>
> > Are there people still using V1?
> > Thanks
> > Gary
> >
> > On 8/25/16, 4:25 AM, "Doug Wiegley" 
> wrote:
> >
> >
> > > On Mar 23, 2016, at 4:17 PM, Doug Wiegley <
> doug...@parksidesoftware.com> wrote:
> > >
> > > Migration script has been submitted, v1 is not going anywhere from
> stable/liberty or stable/mitaka, so it’s about to disappear from master.
> > >
> > > I’m thinking in this order:
> > >
> > > - remove jenkins jobs
> > > - wait for heat to remove their jenkins jobs ([heat] added to this
> thread, so they see this coming before the job breaks)
> > > - remove q-lbaas from devstack, and any references to lbaas v1 in
> devstack-gate or infra defaults.
> > > - remove v1 code from neutron-lbaas
> >
> > FYI, all of the above have completed, and the final removal is in
> the merge queue: https://review.openstack.org/#/c/286381/
> >
> > Mitaka will be the last stable branch with lbaas v1.
> >
> > Thanks,
> > doug
> >
> > >
> > > Since newton is now open for commits, this process is going to get
> started.
> > >
> > > Thanks,
> > > doug
> > >
> > >
> > >
> > >> On Mar 8, 2016, at 11:36 AM, Eichberger, German <
> german.eichber...@hpe.com> wrote:
> > >>
> > >> Yes, it’s Database only — though we changed the agent driver in
> the DB from V1 to V2 — so if you bring up a V2 with that database it should
> reschedule all your load balancers on the V2 agent driver.
> > >>
> > >> German
> > >>
> > >>
> > >>
> > >>
> > >> On 3/8/16, 3:13 AM, "Samuel Bercovici" 
> wrote:
> > >>
> > >>> So this looks like only a database migration, right?
> > >>>
> > >>> -Original Message-
> > >>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
> > >>> Sent: Tuesday, March 08, 2016 12:28 AM
> > >>> To: OpenStack Development Mailing List (not for usage questions)
> > >>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 -
> are weready?
> > >>>
> > >>> Ok, for what it’s worth we have contributed our migration
> script: https://review.openstack.org/#/c/289595/ — please look at this as
> a starting point and feel free to fix potential problems…
> > >>>
> > >>> Thanks,
> > >>> German
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> On 3/7/16, 11:00 AM, "Samuel Bercovici" 
> wrote:
> > >>>
> >  As far as I recall, you can specify the VIP in creating the LB
> so you will end up with same IPs.
> > 
> >  -Original Message-
> >  From: Eichberger, German [mailto:german.eichber...@hpe.com]
> >  Sent: Monday, March 07, 2016 8:30 PM
> >  To: OpenStack Development Mailing List (not for usage questions)
> >  Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1
> - are weready?
> > 
> >  Hi Sam,
> > 
> >  So if you have some 3rd party hardware you only need to change
> the
> >  database (your steps 1-5) since the 3rd party hardware will
> just keep
> >  load balancing…
> > 
> >  Now for Kevin’s case with the namespace driver:
> >  You would need a 6th step to reschedule the loadbalancers with
> the V2 namespace driver — which can be done.
> > 
> >  If we want to migrate to Octavia or (from one LB provider to
> another) it might be better to use the following steps:
> > 
> >  1. Download LBaaS v1 information (Tenants, Flavors, VIPs,
> Pools, Health
> >  Monitors , Members) into some JSON format file(s) 2. Delete
> LBaaS v1 3.
> >  Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON
> format
> >  file into some scripts which recreate the load balancers with
> your
> >  provider of choice —
> > 
> >  6. Run those scripts
> > 
> >  The problem I see is that we will probably end up with
> different VIPs
> >  so the end user would need to change their IPs…
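[The export/recreate flow in German's steps 1-6 above could be sketched roughly as below. This is a hedged illustration only: the JSON layout, field names, and generated CLI invocations are placeholders made up for the example, not the actual migration script under review.]

```python
import json


def export_lbaas_v1(vips, pools, health_monitors, members, path):
    """Step 1: dump the LBaaS v1 objects into a JSON format file."""
    with open(path, "w") as f:
        json.dump({"vips": vips, "pools": pools,
                   "health_monitors": health_monitors,
                   "members": members}, f, indent=2)


def recreate_script(path):
    """Step 5: transform the JSON dump into commands that recreate the
    load balancers with the provider of choice. The CLI invocations are
    illustrative placeholders, not complete lbaas v2 commands."""
    with open(path) as f:
        data = json.load(f)
    lines = []
    for pool in data["pools"]:
        lines.append("neutron lbaas-pool-create --name %s --protocol %s"
                     % (pool["name"], pool["protocol"]))
    return lines
```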
> > 
> >  Thanks,
> >  German
> > 
> > 
> > 
> >  On 3/6/16, 5:35 AM, "Samuel Bercovici" 

Re: [openstack-dev] [ironic] Removal of the ipminative / pyghmi driver

2017-03-10 Thread Pavlo Shchelokovskyy
Hi all,

A small note for the future - we could re-evaluate the possibility of testing
this driver on the gates with VMs when a qemu version with BMC support (plus
[python-]libvirt support for such functionality) is generally available in
distros.

In this case we could stop using virtualbmc and use qemu natively. Not sure
how valuable such testing would be, but at least it won't be pyghmi talking
to pyghmi.

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Thu, Mar 9, 2017 at 8:40 PM, Dmitry Tantsur  wrote:

> On 03/09/2017 07:19 PM, Jay Faulkner wrote:
>
>> Hi all,
>>
>> The ipminative driver is currently an anomaly in ironic’s tree. Despite
>> the driver being initially deprecated in Newton[1], and our desire to
>> drop it being reiterated on the mailing list in December[2], it has not
>> been removed from the tree prior to the Ocata release.
>>
>> At the PTG the ironic team had a short discussion about the ipminative
>> (aka pyghmi) driver — the conclusion was that unless third party CI was run
>> against the driver, we would be forced to follow through on the deprecation
>> and remove it. Testing in upstream CI, against VirtualBMC, was mostly
>> rejected due to both the ipminative driver and virtualbmc using the same
>> python ipmi library (pyghmi), and therefore not being a valid test case.
>> Additionally, further adding urgency to the removal, several active ironic
>> contributors who have tested ipminative drivers in real-world environments
>> have reported them as unstable.
>>
>> The promise of a native python driver to talk to ipmi in ironic is great,
>> but without proper testing and stability, keeping it in-tree does more harm
>> to ironic users than good — in fact, there’s very little indication to a
>> deployer using ironic that the driver may not work stably.
>>
>> Therefore, I’m giving the mailing list a two week warning — unless
>> volunteers come willing to run third party CI against the ipminative
>> drivers in the next two weeks, I will be submitting a patch to remove them
>> entirely from the tree. The driver could then be moved into
>> ironic-staging-drivers by any interested contributors.
>>
>
> Thanks for writing it Jay. Indeed, we spent too much time waiting for
> someone to overtake this driver. I'm very much in favor of removing it, if
> somebody does not step up *right now*.
>
>
>
>> -
>> Jay Faulkner
>> OSIC
>>
>> Related-bug: https://bugs.launchpad.net/ironic/+bug/1671532
>>
>> [1] https://docs.openstack.org/releasenotes/ironic/newton.html
>> [2] http://lists.openstack.org/pipermail/openstack-dev/2016-Dece
>> mber/108666.html
>> 
>>
>>
>
>


Re: [openstack-dev] [mistral] Mistral Custom Actions API Design

2017-03-10 Thread Dougal Matthews
On 10 March 2017 at 04:22, Renat Akhmerov  wrote:

> Hi,
>
> I probably like the base class approach better too.
>
> However, I’m trying to understand if we need this variety of classes.
>
> - Do we need a separate class for asynchronous actions? IMO, since
>   is_sync() is just an instance method that can potentially return both True
>   and False based on the instance state, it shouldn't be introduced by a
>   special class. Otherwise it's confusing that a class declared as
>   AsyncAction can actually be synchronous (if its is_sync() returns True).
>   So maybe we should just leave this method in the base class.
> - I'm also wondering if we should just always pass "context" into the
>   run() method. Those action implementations that don't need it could just
>   ignore it. Not sure though.
>
> This is a good point. I had originally thought it would be backwards
incompatible to make this change - however, users will need to update their
actions to inherit from mistral-lib so they will need to opt in. Then in
mistral we can do something like...

if isinstance(action, mistral_lib.Action):
    action.run(ctx)
else:
    # deprecation warning about action now inheriting from mistral_lib
    # and taking a context etc.
    action.run()

So just having one class would really be the simplest approach.


As far as the mixin approach, I'd say I'd be ok with having mixins for
> context-based actions. Although, like Dougal said, it may be a little
> harder to read, this approach gives huge flexibility for the long term.
> Imagine if we want to have a class of actions that needs some different
> kind of information. Just making it up… For example, some of my actions
> need to be aware of some policies (Congress-like) or information about
> metrics of the current operating system (this is probably a bad example
> because it's easy to use standard Python modules but I'm just trying to
> illustrate the idea). In this case we could have PolicyMixin and
> OperatingSystemMixin that would set required info into the instance state
> or provide handle interfaces for more advanced uses.
>

I like the idea of mixins if we can see a future with many small components
that can be included in an action class. However, like you I didn't manage
to think of any real examples.

It should be possible to migrate to a mixin approach later if we have the
need.
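To make the comparison concrete, here is a rough sketch of the single-base-class style together with the engine-side dispatch discussed in this thread. The class and method names are assumptions for illustration only; the authoritative declarations live in the linked gists and in the mistral-lib review.

```python
import warnings


class Action(object):
    """Sketch of a single mistral-lib base class: context always passed.
    Names here are illustrative, not the actual mistral-lib API."""

    def is_sync(self):
        # Kept on the base class rather than split into a separate
        # AsyncAction class, per the discussion above.
        return True

    def run(self, context):
        raise NotImplementedError


class EchoAction(Action):
    """Example action; it simply ignores the context it does not need."""

    def run(self, context):
        return "echo"


def dispatch(action, ctx):
    """Engine-side dispatch: new-style actions receive the context,
    old-style ones trigger a deprecation warning."""
    if isinstance(action, Action):
        return action.run(ctx)
    warnings.warn("custom actions should inherit from the mistral-lib "
                  "base class and accept a context", DeprecationWarning)
    return action.run()


print(dispatch(EchoAction(), None))  # -> echo
```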


What do you think?
>
> Renat Akhmerov
> @Nokia
>
> On 9 Mar 2017, at 11:35, Ryan Brady  wrote:
>
> At the PTG and previous discussions in IRC, I mentioned there were two
> different design ideas I had for the developer experience for custom action
> development in mistral-lib.  The purpose and intent behind the patch[1] was
> discussed in person at the PTG, and that was helpful for me wrt scope.  I
> feel it would be helpful to discuss and decide together on the final piece of
> this patch.  I'd like to get any feedback on either of these two ideas, as
> they will shape how developers integrate with Mistral in the future and impact
> our OpenStack integration efforts in mistral-extra.  Nothing stops a
> developer from adopting either style in their custom action libraries, but
> most will likely want to remain consistent with style present in the
> upstream code.
>
> I have created separate declaration and usage examples in hopes of
> illustrating some of the similarities and differences.  To me it seems the
> base class example is more declarative/explicit, but the mixin example is
> more extensible and dry.  Both examples reflect on backwards compatibility
> and possible changes to how mistral checks for sync/async actions and how
> to pass the context (as needed by actions that integrate with OpenStack).
>
>
> base classes declaration: https://gist.github.com/rbrady/
> ff86c484e8e6e53ba2dc3dfa17b01b09
>
> base class usage: https://gist.github.com/rbrady/
> 716a02fb2bd38d822c6df8bd642d3ea6
>
> mixins declaration: https://gist.github.com/rbrady/
> d30ae640b19df658a17cd93827125678
>
> mixins usage: https://gist.github.com/rbrady/
> 248cb52d5c5f94854d8c76eee911ce8e
>
>
> Thanks,
>
> Ryan
>
> --
> Ryan Brady
> Cloud Engineering
> rbr...@redhat.com
> 919.890.8925 <(919)%20890-8925>
>
>
> [1] https://review.openstack.org/#/c/411412/
>
>
>
>
>