[openstack-dev] Re: [daisycloud-core] Agenda for IRC meeting Aug. 5 2016

2016-08-04 Thread hu . zhijiang
Forgot to mention: the channel has been changed from #daisycloud to 
#openstack-meeting.

B.R.,
Zhijiang




From:    HuZhiJiang180967/user/zte_ltd
To:      hu.zhiji...@zte.com.cn, huzhiji...@gmail.com, 
lu.yao...@zte.com.cn, zhou...@zte.com.cn, sun.jin...@zte.com.cn, 
kong.w...@zte.com.cn, janonymous.codevult...@gmail.com, 
WuWei10166727/user/zte_ltd@ZTE_LTD, jiang.zhif...@zte.com.cn, 
yangjian...@chinamobile.com, jianfei.zh...@nokia.com, 
suvendu.mi...@nokia.com, 
Cc:      "OpenStack Development Mailing List (not for usage questions)" 

Date:    2016-08-05 12:40
Subject: [openstack-dev] [daisycloud-core] Agenda for IRC meeting Aug. 5 2016


1) Roll Call
2) Daisy Call Graph doc and CI doc
3) Git tag sync between Kolla code and images
4) Getting backend type and its use cases
5) Image 2.0.3 login problem
6) Daisy4nfv related status update


B.R.,
Zhijiang


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] Agenda for IRC meeting Aug. 5 2016

2016-08-04 Thread hu . zhijiang
1) Roll Call
2) Daisy Call Graph doc and CI doc
3) Git tag sync between Kolla code and images
4) Getting backend type and its use cases
5) Image 2.0.3 login problem
6) Daisy4nfv related status update


B.R.,
Zhijiang


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Hongbin Lu
I disagree as well. From a user's perspective, Glare looks like a Glance API v3. 
Having Glare and Glance as two independent services is quite confusing. I guess 
you would get a lot of questions like "what is the difference between Glance 
and Glare?", "does Glance depend on Glare?", "is Glare a backend of Glance?", 
etc.

Also, if Glare is split out as an independent service, it will be hard for 
other services to depend on it, since few people are willing to depend on a 
new service, not to mention the risk that Glare might become a persistently 
single-vendor project and thus have trouble joining the big tent. In comparison, 
if Glare stays under Glance, it will have fewer adoption problems.

Best regards,
Hongbin

> -Original Message-
> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
> Sent: August-04-16 3:21 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Glance][TC][Heat][App-
> Catalog][Murano][Tacker] Glare as a new Project
> 
> I disagree. I see glare as a superset of the needs of the image api and
> one feature I need thats image related was specifically shot down as
> "the artefact api will solve that".
> 
> You have all the same needs to version/catalog/store images. They are
> not more special then a versioned/cataloged/stored heat templates,
> murano apps, tuskar workflows, etc. I've heard multiple times, members
> of the glance team saying  that once glare is fully mature, they could
> stub out the v1/v2 glance apis on top of glare. What is the benefit to
> splitting if the end goal is to recombine/make one project irrelevant?
> 
> This feels like to me, another case of an established, original tent
> project not wanting to deal with something that needs to be dealt with,
> and instead pushing it out to another project with the hope that it
> just goes away. With all the traction non original tent projects have
> gotten since the big tent was established, that might be an accurate
> conclusion, but really bad for users/operators of OpenStack.
> 
> I really would like glance/glare to reconsider this stance. OpenStack
> continuously budding off projects is not a good pattern.
> 
> Thanks,
> Kevin
> 
> From: Erno Kuvaja [ekuv...@redhat.com]
> Sent: Thursday, August 04, 2016 10:34 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Glance][TC][Heat][App-
> Catalog][Murano][Tacker] Glare as a new Project
> 
> On Thu, Aug 4, 2016 at 5:20 PM, Clint Byrum  wrote:
> > Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +:
> >>
> >> On 04 Aug 2016, at 17:27, Mikhail Fedosin
> > wrote:
> >> >
> >> > Hi all,
> >> > > > after 6 months of Glare v1 API development we have decided to
> >> > > > continue
> >> > our work in a separate project in the "openstack" namespace with
> >> > its own core team (me, Kairat Kushaev, Darja Shkhray and the
> >> > original creator - Alexander Tivelkov). We want to thank Glance
> >> > community for their support during the incubation period, valuable
> >> > advice and suggestions - this time was really productive for us. I
> >> > believe that this step will allow the Glare project to concentrate
> >> > on feature development and move forward faster. Having the
> >> > independent service also removes inconsistencies in understanding
> >> > what Glance project is: it seems that a single project cannot own
> >> > two different APIs with partially overlapping functionality. So
> >> > with the separation of Glare into a new project, Glance may
> >> > continue its work on the OpenStack Images API, while Glare will
> become the reference implementation of the new OpenStack Artifacts API.
> >> >
> >>
> >> I would suggest looking at more than just the development process
> >> when reflecting on this choice.
> >> While it may allow more rapid development, doing on your own will
> >> increase costs for end users and operators in areas like packaging,
> >> configuration, monitoring, quota ... gaining critical mass in
> >> production for Glare will be much more difficult if you are not
> building on the Glance install base.
> >
> > I have to agree with Tim here. I respect that it's difficult to build
> > on top of Glance's API, rather than just start fresh. But, for
> > operators, it's more services, more API's to audit, and more
> > complexity. For users, they'll now have two ways to upload software
> to
> > their clouds, which is likely to result in a large portion just
> > ignoring Glare even when it would be useful for them.
> >
> > What I'd hoped when Glare and Glance combined, was that there would
> be
> > a single API that could be used for any software upload and listing.
> > Is there any kind of retrospective or documentation somewhere that
> > explains why that wasn't possible?
> >
> 
> I was planning to leave this branch on its own, but I have to 

Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread joehuang
I think the problem is caused by the definition of "official OpenStack 
project" for big-tent projects.

I understand that each OpenStack vendor wants some differentiation in their 
solution, while also wanting to collaborate on common core projects.

If we replace the title "official OpenStack project" with "OpenStack ecosystem 
player", and treat the "big tent" as an "ecosystem playground" (no closed roof), 
the TC can put more focus on governance of the core projects 
(the current non-big-tent projects) and provide a more open place to grow a rich 
ecosystem. The bigger the ecosystem, the more scenarios and demands of cloud 
operators (public, private, hybrid) can be fulfilled with different choices, 
and the competition will also help the best options emerge.

Best Regards
Chaoyi Huang (joehuang)


From: Erno Kuvaja [ekuv...@redhat.com]
Sent: 05 August 2016 1:15
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

On Thu, Aug 4, 2016 at 9:56 AM, Duncan Thomas  wrote:
> On 1 August 2016 at 18:14, Adrian Otto  wrote:
>>
>> I am struggling to understand why we would want to remove projects from
>> our big tent at all, as long as they are being actively developed under the
>> principles of "four opens". It seems to me that working to disqualify such
>> projects sends an alarming signal to our ecosystem. The reason we made the
>> big tent to begin with was to set a tone of inclusion. This whole discussion
>> seems like a step backward. What problem are we trying to solve, exactly?
>
>
> Any project existing in the big tent sets a significant barrier (policy,
> technical, mindshare) of entry to any competing project that might spring
> up. The cost of entry as an individual into a single-vendor project is much
> higher in general than a diverse one (back-channel communications,
> differences in vision, monoculture, commercial pressures, etc), and so
> having a non-diverse project in the big tent reduces the possibilities of a
> better replacement appearing.
>

Actually, I couldn't disagree more. Since the big tent and the stackforge move
under the openstack namespace, the effect has been exactly the
opposite. Competitors have far less need to collaborate with each
other to be part of OpenStack, as anyone can just kick up their own
project and do it their way while still being part of the
community/ecosystem/whatever-you-want-to-call-it.

We see projects splitting when they do not share the core
concepts (which is a good thing), but we do not see projects combining
their efforts when they do overlapping things. Maybe we will see this
lack of diversity keep growing as long as we don't care about it (a tag
here and another there is not going to stop the company behind the
project from pushing it to their customers even when more diverse or
better options exist; it's still part of OpenStack and it's "ours"). If we
start pushing the projects that are single vendor out of the big tent,
we put more pressure on several of those to combine their efforts
rather than continue competing for the same thing, and if they don't want
to play together I don't see anything wrong with sending a clear message
that we don't want to share the cost of it.

I personally see the proposal not as limiting competition from
appearing, but rather that the single-vendor competition might not stick
around when the competing projects are under threat of being thrown out.
If someone brings a competing project into the ecosystem, 18 months is
also a pretty decent time to see whether that approach is superior enough
(to attract the diversity) to justify its existence, or whether those
people should just try to play with others instead of doing their own thing.

I'm all for selective inclusion based on meritocracy, not only at the
person level, but at the project level as well.

- Erno

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Progress on overcloud upgrade / update jobs

2016-08-04 Thread Emilien Macchi
Hi,

I'm currently working iteratively to get new upstream jobs that test
upgrades and updates.
Until now, I'm taking baby steps. I bootstrapped the work to upgrade the
undercloud; see https://review.openstack.org/#/c/346995/ for details
(it's almost working; we're hitting a packaging issue now).

Now I am interested in having 2 overcloud jobs:

- update: Newton -> Newton: basically, we already have it with
gate-tripleo-ci-centos-7-ovb-upgrades, but my proposal is to use the
multinode work that James started.
I have a PoC (2 lines of code):
https://review.openstack.org/#/c/351330/1 that works: it deploys an
overcloud using packaging, applies the patch in THT and runs an overcloud
update. I tested it and it works fine (I tried to break Keystone).
Right now the job name is
gate-tripleo-ci-centos-7-nonha-multinode-upgrades-nv because I modeled it
on the existing ovb job that does the exact same thing.
I propose renaming it to
gate-tripleo-ci-centos-7-nonha-multinode-updates-nv. What do you
think?

- upgrade: Mitaka -> Newton: I haven't started anything yet, but the
idea is to test the upgrade from stable to master, using the multinode
job (not ovb).
I can prototype something, but I would like to hear from the community first.

Please give some feedback if you are interested in this work, and I
will spend some time on $topic during the next weeks.

Note: please also look at my thread about the undercloud upgrade job; I
need your feedback there too.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-04 Thread Armando M.
On 4 August 2016 at 11:35, Sean Dague  wrote:

> One of the cycle goals in newton was to get devstack over to neutron by
> default. Neutron is used by 90+% of our users, and nova network is
> deprecated, and is not long for this world.
>
> Because devstack is used by developers as well as by our test
> infrastructure, the major stumbling block was coming up with a good
> working default on a machine with only 1 interface, that doesn't leave
> that interface in a totally broken state if you reboot the box (noting
> that ovs changes are persistent by default, but brctl ones are not).
>
> We think we've come up with a model that works. It's here -
> https://review.openstack.org/#/c/350750/. And while it's surprisingly
> short, it took a lot of thinking this one through to get there.
>
> The crux of it is that we trust the value of PUBLIC_INTERFACE in a new
> way on the neutron side. It is now unset by default (logic was changed
> in n-net to keep things right there), and if set, then we assume you
> really want neutron to manage this physical interface.
>
> If not, that's cool. We're automatically creating a bridge interface
> (with no physical interfaces in it) and managing that. For single node
> testing this works fine. It passes all the tempest tests[1]. The only
> thing that's really weird in this setup is that because there is no
> physical interface in that bridge, there is no path to the outside world
> from guests. That means no package updates on them.
>
> We address that with an iptables masq rule. It's a little cheaty pants,
> however of the options we've got, it didn't seem so bad. (Note: if you
> have a better option and are willing to get knee deep in solving it,
> please do so. More contributors the better.)
>
> It's going to take a bit for docs to all roll over here, but I think we
> actually want this out sooner rather than later to find any other edge
> cases that it introduces. There will be some bumpiness here. However,
> being able to bring up a full neutron with only the 4 passwords
> specified in config is quite nice.
>
> 1. actually 5 tests fail for unrelated reasons, which is that tempest
> isn't properly excluding tests for services that aren't running because
> it makes some assumptions on the gate config. That will be fixed soon.
>
> -Sean
>
>
So glad we are finally within the grasp of this!

I posted [1], just to err on the side of caution and get the opportunity to
see how other gate jobs for Neutron might be affected by this change.

Are there any devstack-gate changes lined up too that we should be aware of?
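
For anyone wanting to kick the tires early, a minimal local.conf along the
lines below should be enough once the change merges (a sketch only, based on
Sean's description above; the four password variables are the usual devstack
ones, the values are placeholders, and PUBLIC_INTERFACE stays unset unless you
really want Neutron to take over the physical NIC):

    [[local|localrc]]
    ADMIN_PASSWORD=secretadmin
    DATABASE_PASSWORD=secretdatabase
    RABBIT_PASSWORD=secretrabbit
    SERVICE_PASSWORD=secretservice
    # Leave PUBLIC_INTERFACE unset to get the bridge-only default described
    # above; uncomment only if Neutron should manage the physical interface.
    #PUBLIC_INTERFACE=eth0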

Cheers,
Armando

[1] https://review.openstack.org/#/c/351450/


> --
> Sean Dague
> http://dague.net
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Jay Pipes

On 08/04/2016 06:40 PM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2016-08-04 18:14:46 -0400:

On 08/04/2016 05:30 PM, Clint Byrum wrote:

Excerpts from Fox, Kevin M's message of 2016-08-04 19:20:43 +:

I disagree. I see glare as a superset of the needs of the image api, and one feature I 
need that's image related was specifically shot down as "the artefact api will solve 
that".

You have all the same needs to version/catalog/store images. They are not more 
special than versioned/cataloged/stored heat templates, murano apps, tuskar 
workflows, etc. I've heard multiple times, from members of the glance team, 
that once glare is fully mature, they could stub out the v1/v2 glance apis on 
top of glare. What is the benefit of splitting if the end goal is to 
recombine/make one project irrelevant?

This feels to me like another case of an established, original-tent project 
not wanting to deal with something that needs to be dealt with, and instead 
pushing it out to another project with the hope that it just goes away. With 
all the traction non-original-tent projects have gotten since the big tent was 
established, that might be an accurate conclusion, but it is really bad for 
users/operators of OpenStack.

I really would like glance/glare to reconsider this stance. OpenStack 
continuously budding off projects is not a good pattern.



So very this.


Honestly, operators need to move past the "oh, not another service to
install/configure" thing.

With the whole "microservice the world" movement, that ship has long
since sailed, and frankly, the cost of adding another microservice into
the deployment at this point is tiny -- it should be nothing more than a
few lines in a Puppet manifest, Chef module, Ansible playbook, or Salt
state file.

If you're doing deployment right, adding new services to the
microservice architecture that OpenStack projects are being pushed
towards should not be an issue.

I find it odd that certain folks are pushing hard for the
shared-nothing, microservice-it-all software architecture and yet
support this mentality that adding another couple (dozen if need be)
lines of configuration data to a deployment script is beyond the pale to
ask of operators.



Agreed, deployment isn't that big of a deal. I actually thought Kevin's
point was that the lack of focus was the problem. I think the point in
bringing up deployment is simply that it isn't free, not that it's the
reason to combine the two.


My above statement was more directed to Kevin and Tim, both of whom 
indicated that adding another service to the deployment was a major problem.



It's clear there's been a disconnect in expectations between the outside
and inside of development.

The hope from the outside was that we'd end up with a user friendly
frontend API to artifacts, that included more capability for cataloging
images.  It sounds like the two teams never actually shared that vision
and remained two teams, instead of combining into one under a shared
vision.

Thanks for all your hard work, Glance and Glare teams. I don't think
any of us can push a vision on you. But, as Kevin says above: consider
addressing the lack of vision and cooperation head on, rather than
turning your backs on each-other. The users will sing your praises if
you can get it done.


It's been three years, two pre-big-tent TC graduation reviews (one for a
split out murano app catalog, one for the combined project team being
all things artifact), and over that three years, the original Glance
project has at times crawled to a near total stop from a contribution
perspective and not indicated much desire to incorporate the generic
artifacts API or code. Time for this cooperation came and went with
ample opportunities.

The Glare project is moving on.


The point is that this should be reconsidered, and that these internal
problems, now surfaced, seem surmountable if there's actually a reason
to get past them. Since it seems from the start, Glare and Glance never
actually intended to converge on a generic artifacts API, but rather
to simply tolerate one another (back when I supported their merging,
I never thought this would be the case), then of course, it wasn't going
to go well.

But, if I look at this from a user perspective, if I do want to use
anything other than images as cloud artifacts, the story is pretty
confusing.


Actually, I beg to differ. A unified OpenStack Artifacts API, long-term, 
will be more user-friendly and less confusing since a single API can be 
used for various kinds of similar artifacts -- images, Heat templates, 
Tosca flows, Murano app manifests, maybe Solum things, maybe eventually 
Nova flavor-like things, etc.


That would leave the Glance project team to focus on glance_store, 
stabilizing storage drivers and maybe working on utilities to transform 
image formats and/or provide some unified incremental diff'ing solution 
for both container and VM images now that they no longer need to deal 
with a 

Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2016-08-04 18:14:46 -0400:
> On 08/04/2016 05:30 PM, Clint Byrum wrote:
> > Excerpts from Fox, Kevin M's message of 2016-08-04 19:20:43 +:
> >> I disagree. I see glare as a superset of the needs of the image api and 
> >> one feature I need thats image related was specifically shot down as "the 
> >> artefact api will solve that".
> >>
> >> You have all the same needs to version/catalog/store images. They are not 
> >> more special then a versioned/cataloged/stored heat templates, murano 
> >> apps, tuskar workflows, etc. I've heard multiple times, members of the 
> >> glance team saying  that once glare is fully mature, they could stub out 
> >> the v1/v2 glance apis on top of glare. What is the benefit to splitting if 
> >> the end goal is to recombine/make one project irrelevant?
> >>
> >> This feels like to me, another case of an established, original tent 
> >> project not wanting to deal with something that needs to be dealt with, 
> >> and instead pushing it out to another project with the hope that it just 
> >> goes away. With all the traction non original tent projects have gotten 
> >> since the big tent was established, that might be an accurate conclusion, 
> >> but really bad for users/operators of OpenStack.
> >>
> >> I really would like glance/glare to reconsider this stance. OpenStack 
> >> continuously budding off projects is not a good pattern.
> >>
> >
> > So very this.
> 
> Honestly, operators need to move past the "oh, not another service to 
> install/configure" thing.
> 
> With the whole "microservice the world" movement, that ship has long 
> since sailed, and frankly, the cost of adding another microservice into 
> the deployment at this point is tiny -- it should be nothing more than a 
> few lines in a Puppet manifest, Chef module, Ansible playbook, or Salt 
> state file.
> 
> If you're doing deployment right, adding new services to the 
> microservice architecture that OpenStack projects are being pushed 
> towards should not be an issue.
> 
> I find it odd that certain folks are pushing hard for the 
> shared-nothing, microservice-it-all software architecture and yet 
> support this mentality that adding another couple (dozen if need be) 
> lines of configuration data to a deployment script is beyond the pale to 
> ask of operators.
> 

Agreed, deployment isn't that big of a deal. I actually thought Kevin's
point was that the lack of focus was the problem. I think the point in
bringing up deployment is simply that it isn't free, not that it's the
reason to combine the two.

> > It's clear there's been a disconnect in expectations between the outside
> > and inside of development.
> >
> > The hope from the outside was that we'd end up with a user friendly
> > frontend API to artifacts, that included more capability for cataloging
> > images.  It sounds like the two teams never actually shared that vision
> > and remained two teams, instead of combining into one under a shared
> > vision.
> >
> > Thanks for all your hard work, Glance and Glare teams. I don't think
> > any of us can push a vision on you. But, as Kevin says above: consider
> > addressing the lack of vision and cooperation head on, rather than
> > turning your backs on each-other. The users will sing your praises if
> > you can get it done.
> 
> It's been three years, two pre-big-tent TC graduation reviews (one for a 
> split out murano app catalog, one for the combined project team being 
> all things artifact), and over that three years, the original Glance 
> project has at times crawled to a near total stop from a contribution 
> perspective and not indicated much desire to incorporate the generic 
> artifacts API or code. Time for this cooperation came and went with 
> ample opportunities.
> 
> The Glare project is moving on.
> 

The point is that this should be reconsidered, and that these internal
problems, now surfaced, seem surmountable if there's actually a reason
to get past them. Since it seems from the start, Glare and Glance never
actually intended to converge on a generic artifacts API, but rather
to simply tolerate one another (back when I supported their merging,
I never thought this would be the case), then of course, it wasn't going
to go well.

But, if I look at this from a user perspective, if I do want to use
anything other than images as cloud artifacts, the story is pretty
confusing.

Anyway, it's done, and I think we should take it as a lesson that team
mergers are complicated social activities, not technical ones, and so
they should be handled with care.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Jay Pipes

On 08/04/2016 05:30 PM, Clint Byrum wrote:

Excerpts from Fox, Kevin M's message of 2016-08-04 19:20:43 +:

I disagree. I see glare as a superset of the needs of the image api, and one feature I 
need that's image related was specifically shot down as "the artefact api will solve 
that".

You have all the same needs to version/catalog/store images. They are not more 
special than versioned/cataloged/stored heat templates, murano apps, tuskar 
workflows, etc. I've heard multiple times, from members of the glance team, 
that once glare is fully mature, they could stub out the v1/v2 glance apis on 
top of glare. What is the benefit of splitting if the end goal is to 
recombine/make one project irrelevant?

This feels to me like another case of an established, original-tent project 
not wanting to deal with something that needs to be dealt with, and instead 
pushing it out to another project with the hope that it just goes away. With 
all the traction non-original-tent projects have gotten since the big tent was 
established, that might be an accurate conclusion, but it is really bad for 
users/operators of OpenStack.

I really would like glance/glare to reconsider this stance. OpenStack 
continuously budding off projects is not a good pattern.



So very this.


Honestly, operators need to move past the "oh, not another service to 
install/configure" thing.


With the whole "microservice the world" movement, that ship has long 
since sailed, and frankly, the cost of adding another microservice into 
the deployment at this point is tiny -- it should be nothing more than a 
few lines in a Puppet manifest, Chef module, Ansible playbook, or Salt 
state file.


If you're doing deployment right, adding new services to the 
microservice architecture that OpenStack projects are being pushed 
towards should not be an issue.


I find it odd that certain folks are pushing hard for the 
shared-nothing, microservice-it-all software architecture and yet 
support this mentality that adding another couple (dozen if need be) 
lines of configuration data to a deployment script is beyond the pale to 
ask of operators.



It's clear there's been a disconnect in expectations between the outside
and inside of development.

The hope from the outside was that we'd end up with a user friendly
frontend API to artifacts, that included more capability for cataloging
images.  It sounds like the two teams never actually shared that vision
and remained two teams, instead of combining into one under a shared
vision.

Thanks for all your hard work, Glance and Glare teams. I don't think
any of us can push a vision on you. But, as Kevin says above: consider
addressing the lack of vision and cooperation head on, rather than
turning your backs on each-other. The users will sing your praises if
you can get it done.


It's been three years, two pre-big-tent TC graduation reviews (one for a 
split out murano app catalog, one for the combined project team being 
all things artifact), and over that three years, the original Glance 
project has at times crawled to a near total stop from a contribution 
perspective and not indicated much desire to incorporate the generic 
artifacts API or code. Time for this cooperation came and went with 
ample opportunities.


The Glare project is moving on.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-04 Thread Dan Prince
Last week I started some prototype work on what could be a new way to
install the Undercloud. The driving force behind this was some of the
recent "composable services" work we've done in TripleO, so initially I
called it the composable undercloud. There is an etherpad here with links
to some of the patches already posted upstream (many of which stand as
general improvements on their own, outside the scope of what I'm
talking about here):

https://etherpad.openstack.org/p/tripleo-composable-undercloud

The idea, in short, is that we could spin up a small single-process all-
in-one heat-all (engine and API) and thereby avoid things like Rabbit
and MySQL. Then we can use Heat templates to drive the Undercloud
deployment just like we do in the Overcloud.

I created a short video demonstration which goes over some of the
history behind the approach, and shows a live demo of all of this
working with the patches above:

https://www.youtube.com/watch?v=y1qMDLAf26Q

Thoughts? Would it be cool to have a session to discuss this more in
Barcelona?

Dan Prince (dprince)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2016-08-04 19:20:43 +:
> I disagree. I see glare as a superset of the needs of the image api and one 
> feature I need thats image related was specifically shot down as "the 
> artefact api will solve that".
> 
> You have all the same needs to version/catalog/store images. They are not 
> more special then a versioned/cataloged/stored heat templates, murano apps, 
> tuskar workflows, etc. I've heard multiple times, members of the glance team 
> saying  that once glare is fully mature, they could stub out the v1/v2 glance 
> apis on top of glare. What is the benefit to splitting if the end goal is to 
> recombine/make one project irrelevant?
> 
> This feels like to me, another case of an established, original tent project 
> not wanting to deal with something that needs to be dealt with, and instead 
> pushing it out to another project with the hope that it just goes away. With 
> all the traction non original tent projects have gotten since the big tent 
> was established, that might be an accurate conclusion, but really bad for 
> users/operators of OpenStack. 
> 
> I really would like glance/glare to reconsider this stance. OpenStack 
> continuously budding off projects is not a good pattern.
> 

So very this.

It's clear there's been a disconnect in expectations between the outside
and inside of development.

The hope from the outside was that we'd end up with a user friendly
frontend API to artifacts, that included more capability for cataloging
images.  It sounds like the two teams never actually shared that vision
and remained two teams, instead of combining into one under a shared
vision.

Thanks for all your hard work, Glance and Glare teams. I don't think
any of us can push a vision on you. But, as Kevin says above: consider
addressing the lack of vision and cooperation head on, rather than
turning your backs on each-other. The users will sing your praises if
you can get it done.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project mascots update

2016-08-04 Thread Steven Dake (stdake)


From: Heidi Joy Tretheway
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, August 4, 2016 at 9:13 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] Project mascots update

Steve Dake asked: "Heidi, I received a related question from a Kolla core 
reviewer. Do the project teams have input into the mascot design? If this has 
already been discussed on the list, I may have missed it."

HJ: While we haven't discussed this on the ML, we referenced it a bit on the web 
page. Our intent is to create a family of logos that have a distinctive design 
style that is unique to OpenStack community projects. That illustration style 
is something my colleague Todd Morey (OSF creative director) has been working 
on with several professional illustrators, choosing the best of the bunch and 
developing a handful of logos that will serve as guideposts for the rest of the 
designs.

We don't have illustration resources sufficient to seek team input on 
individual designs; that's why we emphasized team input on the mascot concept. 
(As I told Steve Hardy, I'm happy to talk to anyone who has special requests on 
how their mascot is portrayed.) Also, while nearly everyone has an opinion 
about design, those opinions will be varied (and often contradictory). I hope 
you'll be willing to trust the excellent design leadership that produced our 
Summit designs, craveable shirts and stickers, and OpenStack's evolving visual 
identity. That said, I'll be thrilled to hear from anyone with opinions on 
design to find out more about your perspective. I love talking about this stuff.



Heidi,

Of course we trust the excellent design leadership that has produced all of the 
great creative work representing the OpenStack foundation's visual identity. It 
was a question that came up in our team meeting, and one of our core reviewers 
wanted an answer (and I wasn't sure). Thanks for answering :)

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Kevin Benton
Hitless restart logic in the agent.

On Aug 4, 2016 14:07, "Rick Jones"  wrote:

> On 08/04/2016 01:39 PM, Kevin Benton wrote:
>
>> Yep. Some tests are making sure there are no packets lost. Some are
>> making sure that stuff starts working eventually.
>>
>
> Not to be pedantic, but what sort of requirement exists that no packets be
> lost?
>
> rick
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Rick Jones

On 08/04/2016 01:39 PM, Kevin Benton wrote:

Yep. Some tests are making sure there are no packets lost. Some are
making sure that stuff starts working eventually.


Not to be pedantic, but what sort of requirement exists that no packets 
be lost?


rick


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Kevin Benton
Yep. Some tests are making sure there are no packets lost. Some are making
sure that stuff starts working eventually.
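
For illustration, the eventual-connectivity case could look roughly like the
sketch below. This is not the actual Neutron helper; the name
assert_async_ping comes from Brian's suggestion quoted further down, and the
namespace/ping invocation is just one plausible way to probe:

    import subprocess
    import time


    def assert_async_ping(namespace, dst_ip, timeout=60, interval=1):
        """Assert dst_ip becomes reachable eventually, tolerating transient
        loss while the agent restarts."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            # One probe per attempt, run from the given network namespace.
            rc = subprocess.call(
                ['ip', 'netns', 'exec', namespace,
                 'ping', '-c', '1', '-W', '1', dst_ip])
            if rc == 0:
                return
            time.sleep(interval)
        raise AssertionError('%s did not become reachable within %s seconds'
                             % (dst_ip, timeout))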

On Aug 4, 2016 12:28, "Brian Haley"  wrote:

> On 08/04/2016 03:16 PM, Rick Jones wrote:
>
>> On 08/04/2016 12:04 PM, Kevin Benton wrote:
>>
>>> Yeah, I wasn't thinking when I +2'ed that. There are two use cases for
>>> the pinger, one for ensuring continuous connectivity and one for
>>> eventual connectivity.
>>>
>>> I think the revert is okay for a quick fix, but we really need a new
>>> argument to the pinger for strictness to decide which behavior the test
>>> is looking for.
>>>
>>
>> What situation requires continuous connectivity?
>>
>
> Maybe the test names can answer that:
>
> test_assert_pings_during_br_int_setup_not_lost()
> _test_assert_pings_during_br_phys_setup_not_lost()
>
> In other cases we want the previous behavior - is that IP alive?  It's
> probably just best to put the old code back and make a new
> assert_async_ping() based on this code.
>
> -Brian
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Chris Friesen

On 08/04/2016 01:52 PM, Jay Faulkner wrote:


Ironic does have radosgw support, and it's documented here:
http://docs.openstack.org/developer/ironic/deploy/radosgw.html -- clearly it's
not "first class" as we don't validate it in CI like we do with swift, but the
code exists and I believe we have users out in the wild.


If it's not validated in CI, it's broken.  At least that's my experience with 
nova and cinder...


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-04 Thread Jay Pipes

On 08/04/2016 01:17 PM, Chris Friesen wrote:

On 08/04/2016 09:28 AM, Edward Leafe wrote:


The idea that specifying a distinct microversion would somehow guarantee an 
immutable behavior, though, is simply not the case. We discussed this at 
length at the midcycle regarding the dropping of the nova-network code; once 
that's dropped, there won't be any way to get that behavior no matter what 
microversion you specify. It's gone. We signal this with deprecation notices, 
release notes, etc., and it's up to individuals to move away from using that 
behavior during this deprecation period. A new microversion will never help 
anyone who doesn't follow these signals.


I was unable to attend the midcycle, but that seems to violate the
original description of how microversions were supposed to work.  As I
recall, the original intent was something like this:

At time T, we remove an API via microversion X.  We keep the code around
to support it when using microversions less than X.

At some later time T+i, we bump the minimum microversion up to X.  At
this point nobody can ever request the older microversions, so we can
safely remove the server-side code.

Have we given up on this?  Or is nova-network a special-case?


This is how Ironic works with microversions today, yes. However, in Nova 
we've unfortunately taken the policy that we will probably *never* bump 
the minimum microversion.


I personally find this extremely distasteful as I hate all the crap that 
needs to sit around along with all the conditionals in the code that 
have to do with continuing to support old behaviour.


If it were up to me, the Nova project would just say to operators and 
library/SDK developers: if you want feature foo, then the tradeoff is 
that the minimum microversion is going up to X. Operators can choose to 
continue on the old code or indicate to their users that they are 
running a newer minimum version of the Compute API, and users will need 
to use a library that passes at least that minimum version header.
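
As a concrete illustration of what passing that version header looks like from 
a client (a sketch only; the endpoint and token below are placeholders, and 
only the X-OpenStack-Nova-API-Version header name is taken from the real 
Compute API):

    import requests

    # Placeholder endpoint and token; substitute your own.
    NOVA_ENDPOINT = 'http://controller:8774/v2.1'
    TOKEN = '<keystone-token>'

    headers = {
        'X-Auth-Token': TOKEN,
        # Request a specific Compute API microversion; asking for a version
        # outside the range the deployment supports gets a 406 back.
        'X-OpenStack-Nova-API-Version': '2.25',
    }

    resp = requests.get(NOVA_ENDPOINT + '/servers', headers=headers)
    print(resp.status_code)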


IMHO we have gone out of our way to cater to mythical users who under no 
circumstances should ever be affected by changes in an API. Enough is 
enough. It's time we took back some control and cleaned up a bunch of 
technical debt and poor API design vestigial tails by raising the 
minimum microversion of the Compute API.


And no, the above isn't saying "to hell with our users". It's a simple 
statement that we cannot be beholden to a small minority of users, 
however vocal, that wish that nothing would ever change. These users can 
continue to deploy CentOS4 or Ubuntu 10.04 or libvirt 0.9.8 [1] if they 
wish and not upgrade OpenStack, but that shouldn't mean that we as a 
project cannot start tidying up our own house.


OK, frustration vented... I'm heading back in for reviews on cleaning up 
unit test cruft in the image backend that Matt Booth pushed.


Best,
-jay

[1] Hmm, interesting that Nova requires a minimum libvirt version of 
1.2.1 nowadays. So, we're happy to say to operators "in order to use 
modern Nova you need to upgrade to 1.2.1 libvirt" but we aren't willing 
to tell those same operators "upgrading to this version of Nova will 
mean your users will have to make a small change to the way they 
interact with the Compute API". Really, this doesn't jive with me.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Fox, Kevin M
Awesome. Thanks for letting me know. :) This does raise it back up on my 
priority list a bit, as I know it's more likely to work now with less effort.

Thanks,
Kevin

From: Jay Faulkner [j...@jvf.cc]
Sent: Thursday, August 04, 2016 12:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects


On Aug 4, 2016, at 12:43 PM, Fox, Kevin M wrote:

The problem is, OpenStack is a very fractured landscape. It takes a significant 
amount of time for an operator to deploy "one more service".

So, I spent a while deploying Trove, got it 90% working, then discovered Trove 
didn't work with RadosGW. RadosGW was a done deal long ago, and couldn't be 
re-evaluated at that point. (Plus you can't have more than one swift endpoint in 
a cloud...). So, for now, I'm supporting a 90% functional Trove.

If I went and installed Ironic tomorrow, would it work with the radosgw I 
already have? I have no idea. The "it supports swift" claim implies but doesn't 
answer it. If I want to consider deploying it now, I have to block out even more 
time to experiment in order to try, and then do a bunch of manual testing to 
verify.


Ironic does have radosgw support, and it's documented here: 
http://docs.openstack.org/developer/ironic/deploy/radosgw.html -- clearly it's 
not "first class" as we don't validate it in CI like we do with swift, but the 
code exists and I believe we have users out in the wild.

I know this is orthogonal to the discussion, but I wanted someone seeing this 
thread to know it does work :).

Thanks,
Jay Faulkner
OSIC

This kind of thing makes it even harder on operators to deploy new services.

Yes, it could be solved at the Ceph level, where they deploy a complete 
OpenStack with all the advanced services and test everything, but OpenStack is 
already doing that. It is significantly easier for OpenStack to test it instead 
of Ceph.

Thanks,
Kevin





From: Ben Swartzlander [b...@swartzlander.org]
Sent: Thursday, August 04, 2016 12:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

On 08/04/2016 03:02 PM, Fox, Kevin M wrote:
Nope. The incompatibility was for things that never were in radosgw, not things 
that regressed over time. The tmpurl differences and the namespacing things were 
there from the beginning, when first introduced.

At the last summit, I started with the DefCore folks and worked backwards until 
someone said, no we won't ever add tests for compatibility for that because 
radosgw is not an OpenStack project and we only test OpenStack.

Yes, I think thats a terrible thing. I'm just relaying the message I got.

I don't see how this is terrible at all. If someone were to start up a
clone of another OpenStack project (say, Cinder) which aimed for 100%
API compatibility with Cinder, but outside the tent, and then they
somehow failed to achieve true compatibility because of Cinder's
undocumented details, nobody would proclaim that this was somehow
our (the OpenStack community's) fault.

I think the Radosgw people probably have a legitimate beef with the
Swift team about the lack of an official API spec that they can code to,
but that's a choice for the Swift community to make. If users of Swift
are satisfied with a the-code-is-the-spec stance then I say good luck to
them.

If the user community cares enough about interoperability between
swift-like things they will demand an API spec and conformance tests and
someone will write those and then radosgw will have something to conform
to. None of this has anything to do with the governance model for Ceph
though.

-Ben Swartzlander



Thanks,
Kevin

From: Ben Swartzlander [b...@swartzlander.org]
Sent: Thursday, August 04, 2016 10:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

On 08/04/2016 11:57 AM, Fox, Kevin M wrote:
Ok. I'll play devil's advocate here and speak to the other side of this, because 
you raised an interesting issue...

Ceph is outside of the tent. It provides a (mostly) api compatible 
implementation of the swift api (radosgw), and it is commonly used in OpenStack 
deployments.

Other OpenStack projects don't take it into account because it's not a big tent 
thing, even though it is very common. Because of some rules about only testing 
OpenStack things, radosgw is not tested against even though it is so common.

I call BS on this assertion. We test things that are outside the tent in the
upstream gate all the time -- the only requirement is that they be
released. We won't test against unreleased stuff that's outside the big
tent and the reason for that should be obvious.

This 

Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Erno Kuvaja
That's fine. I might have an idea of who has been saying so, but that
statement has never been the consensus even among the Glance core
team. In fact you would be well aware of this if you had followed the
pretty colorful discussion around the glance spec.

We also have not had any intention of providing versioned images, as that
would break our immutability promise (the image data stays
consistent with the id throughout the image's life once it has turned to the
"active" state; that is, you will always get the same data, if any, out when
you do an image-download of a specific image ID).

So no, I don't see us recombining these projects in any foreseeable
future, and I'd be perfectly fine if some future day Glance becomes
obsolete because Glare does its job better.

As for reconsidering the stance, I'd love to see the
community being able to work out the issues and do the development
together. On the other hand, I've been telling Mike for quite a
while that they really should consider spinning Glare off. In reality
this will be much better for Glare's future development and for the teams
who have been waiting desperately for the last couple of cycles to start
consuming it, and it will also make a bit of room for the Glance community to
fully focus on the things that are important to Glance consumers
rather than trying to divide its attention.

- Erno

On Thu, Aug 4, 2016 at 8:20 PM, Fox, Kevin M  wrote:
> I disagree. I see glare as a superset of the needs of the image api and one 
> feature I need thats image related was specifically shot down as "the 
> artefact api will solve that".
>
> You have all the same needs to version/catalog/store images. They are not 
> more special then a versioned/cataloged/stored heat templates, murano apps, 
> tuskar workflows, etc. I've heard multiple times, members of the glance team 
> saying  that once glare is fully mature, they could stub out the v1/v2 glance 
> apis on top of glare. What is the benefit to splitting if the end goal is to 
> recombine/make one project irrelevant?
>
> This feels like to me, another case of an established, original tent project 
> not wanting to deal with something that needs to be dealt with, and instead 
> pushing it out to another project with the hope that it just goes away. With 
> all the traction non original tent projects have gotten since the big tent 
> was established, that might be an accurate conclusion, but really bad for 
> users/operators of OpenStack.
>
> I really would like glance/glare to reconsider this stance. OpenStack 
> continuously budding off projects is not a good pattern.
>
> Thanks,
> Kevin
> 
> From: Erno Kuvaja [ekuv...@redhat.com]
> Sent: Thursday, August 04, 2016 10:34 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
> Glare as a new Project
>
> On Thu, Aug 4, 2016 at 5:20 PM, Clint Byrum  wrote:
>> Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +:
>>>
>>> On 04 Aug 2016, at 17:27, Mikhail Fedosin 
>>> > wrote:
>>> >
>>> > Hi all,
>>> > > > after 6 months of Glare v1 API development we have decided to continue
>>> > our work in a separate project in the "openstack" namespace with its own
>>> > core team (me, Kairat Kushaev, Darja Shkhray and the original creator -
>>> > Alexander Tivelkov). We want to thank Glance community for their support
>>> > during the incubation period, valuable advice and suggestions - this time
>>> > was really productive for us. I believe that this step will allow the
>>> > Glare project to concentrate on feature development and move forward
>>> > faster. Having the independent service also removes inconsistencies
>>> > in understanding what Glance project is: it seems that a single project
>>> > cannot own two different APIs with partially overlapping functionality. So
>>> > with the separation of Glare into a new project, Glance may continue its
>>> > work on the OpenStack Images API, while Glare will become the reference
>>> > implementation of the new OpenStack Artifacts API.
>>> >
>>>
>>> I would suggest looking at more than just the development process when
>>> reflecting on this choice.
>>> While it may allow more rapid development, doing on your own will increase
>>> costs for end users and operators in areas like packaging, configuration,
>>> monitoring, quota … gaining critical mass in production for Glare will
>>> be much more difficult if you are not building on the Glance install base.
>>
>> I have to agree with Tim here. I respect that it's difficult to build on
>> top of Glance's API, rather than just start fresh. But, for operators,
>> it's more services, more API's to audit, and more complexity. For users,
>> they'll now have two ways to upload software to their clouds, which is
>> likely to result in a large portion just ignoring Glare even when it
>> 

Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Jay Faulkner

On Aug 4, 2016, at 12:43 PM, Fox, Kevin M wrote:

The problem is, OpenStack is a very fractured landscape. It takes a significant 
amount of time for an operator to deploy "one more service".

So, I spent a while deploying Trove, got it 90% working, then discovered Trove 
didn't work with RadosGW. RadosGW was a done deal long ago, and couldn't be 
re-evaluated at that point. (Plus you can't have more than one swift endpoint in 
a cloud...). So, for now, I'm supporting a 90% functional Trove.

If I went and installed Ironic tomorrow, would it work with the radosgw I 
already have? I have no idea. The "it supports swift" claim implies but doesn't 
answer it. If I want to consider deploying it now, I have to block out even more 
time to experiment in order to try, and then do a bunch of manual testing to 
verify.


Ironic does have radosgw support, and it's documented here: 
http://docs.openstack.org/developer/ironic/deploy/radosgw.html -- clearly it's 
not "first class" as we don't validate it in CI like we do with swift, but the 
code exists and I believe we have users out in the wild.

I know this is orthogonal to the discussion, but I wanted someone seeing this 
thread to know it does work :).

Thanks,
Jay Faulkner
OSIC

This kind of thing makes it even harder on operators to deploy new services.

Yes, it could be solved at the Ceph level, where they deploy a complete 
OpenStack with all the advanced services and test everything, but OpenStack is 
already doing that. It is significantly easier for OpenStack to test it instead 
of Ceph.

Thanks,
Kevin





From: Ben Swartzlander [b...@swartzlander.org]
Sent: Thursday, August 04, 2016 12:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

On 08/04/2016 03:02 PM, Fox, Kevin M wrote:
Nope. The incompatibility was for things that never were in radosgw, not things 
that regressed over time. The tmpurl differences and the namespacing things were 
there from the beginning, when first introduced.

At the last summit, I started with the DefCore folks and worked backwards until 
someone said, no we won't ever add tests for compatibility for that because 
radosgw is not an OpenStack project and we only test OpenStack.

Yes, I think thats a terrible thing. I'm just relaying the message I got.

I don't see how this is terrible at all. If someone were to start up a
clone of another OpenStack project (say, Cinder) which aimed for 100%
API compatibility with Cinder, but outside the tent, and then they
somehow failed to achieve true compatibility because of Cinder's
undocumented details, nobody would proclaim that this was somehow
our (the OpenStack community's) fault.

I think the Radosgw people probably have a legitimate beef with the
Swift team about the lack of an official API spec that they can code to,
but that's a choice for the Swift community to make. If users of Swift
are satisfied with a the-code-is-the-spec stance then I say good luck to
them.

If the user community cares enough about interoperability between
swift-like things they will demand an API spec and conformance tests and
someone will write those and then radosgw will have something to conform
to. None of this has anything to do with the governance model for Ceph
though.

-Ben Swartzlander



Thanks,
Kevin

From: Ben Swartzlander [b...@swartzlander.org]
Sent: Thursday, August 04, 2016 10:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

On 08/04/2016 11:57 AM, Fox, Kevin M wrote:
Ok. I'll play devil's advocate here and speak to the other side of this, because 
you raised an interesting issue...

Ceph is outside of the tent. It provides a (mostly) api compatible 
implementation of the swift api (radosgw), and it is commonly used in OpenStack 
deployments.

Other OpenStack projects don't take it into account because it's not a big tent 
thing, even though it is very common. Because of some rules about only testing 
OpenStack things, radosgw is not tested against even though it is so common.

I call BS on this assertion. We test things that are outside the tent in the
upstream gate all the time -- the only requirement is that they be
released. We won't test against unreleased stuff that's outside the big
tent and the reason for that should be obvious.

This causes odd breakages at times that could easily be prevented, but for 
procedural things around the Big Tent.

The only way I can see for "odd breakages" to sneak in is on the Ceph
side, if they aren't testing their changes against OpenStack and they
introduce a regression, then that's their fault (assuming of course that
we have good test coverage running against the latest stable release of
Ceph). It's 

Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Fox, Kevin M
I get that. I don't blame Mirantis for that either. That's a really difficult 
place to be in.

This is the kind of silo disconnect I've been talking about though, where users 
of a project have real concrete needs, and the project doesn't listen as they 
think the current solution is "good enough" for everyone.

And it's another case where this is probably the right call to make at the 
project(s) level under the circumstances in order to make progress again. But it 
probably should be solved at a general OpenStack architecture level or by the TC, 
so that technical architecture requirements and user feedback can be incorporated 
into the process, rather than splitting for non-technical reasons and making the 
operator/user experience worse, not better, for OpenStack as a whole.

Thanks,
Kevin

From: Ian Cordasco [sigmaviru...@gmail.com]
Sent: Thursday, August 04, 2016 11:47 AM
To: Tim Bell; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

-Original Message-
From: Tim Bell 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: August 4, 2016 at 13:19:02
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

>
> > On 04 Aug 2016, at 19:34, Erno Kuvaja wrote:
> >
> > On Thu, Aug 4, 2016 at 5:20 PM, Clint Byrum wrote:
> >> Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +:
> >>>
> >>> On 04 Aug 2016, at 17:27, Mikhail Fedosin >
> wrote:
> 
>  Hi all,
> >> after 6 months of Glare v1 API development we have decided to continue
>  our work in a separate project in the "openstack" namespace with its own
>  core team (me, Kairat Kushaev, Darja Shkhray and the original creator -
>  Alexander Tivelkov). We want to thank Glance community for their support
>  during the incubation period, valuable advice and suggestions - this time
>  was really productive for us. I believe that this step will allow the
>  Glare project to concentrate on feature development and move forward
>  faster. Having the independent service also removes inconsistencies
>  in understanding what Glance project is: it seems that a single project
>  cannot own two different APIs with partially overlapping functionality. 
>  So
>  with the separation of Glare into a new project, Glance may continue its
>  work on the OpenStack Images API, while Glare will become the reference
>  implementation of the new OpenStack Artifacts API.
> 
> >>>
> >>> I would suggest looking at more than just the development process when
> >>> reflecting on this choice.
> >>> While it may allow more rapid development, doing on your own will increase
> >>> costs for end users and operators in areas like packaging, configuration,
> >>> monitoring, quota … gaining critical mass in production for Glare will
> >>> be much more difficult if you are not building on the Glance install base.
> >>
> >> I have to agree with Tim here. I respect that it's difficult to build on
> >> top of Glance's API, rather than just start fresh. But, for operators,
> >> it's more services, more API's to audit, and more complexity. For users,
> >> they'll now have two ways to upload software to their clouds, which is
> >> likely to result in a large portion just ignoring Glare even when it
> >> would be useful for them.
> >>
> >> What I'd hoped when Glare and Glance combined, was that there would be
> >> a single API that could be used for any software upload and listing. Is
> >> there any kind of retrospective or documentation somewhere that explains
> >> why that wasn't possible?
> >>
> >
> > I was planning to leave this branch on it's own, but I have to correct
> > something here. This split is not introducing new API, it's moving the
> > new Artifact API under it's own project, there was no shared API in
> > first place. Glare was to be it's own service already within Glance
> > project. Also the Artifacts API turned out to be fundamentally
> > incompatible with the Images APIs v1 & v2 due to the totally different
> > requirements. And even the option was discussed in the community I
> > personally think replicating Images API and carrying the cost it being
> > in two services that are fundamentally different would have been huge
> > mistake we would have paid for long time. I'm not saying that it would
> > have been impossible, but there is lots of burden in Images APIs that
> > Glare really does not need to carry, we just can't get rid of it and
> > likely no-one would have been happy to see Images API v3 around the
> > time when we are working super hard to get the v1 users moving to v2.
> >
> > Packaging glance-api, glance-registry and glare-api from 

Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Fox, Kevin M
The problem is, OpenStack is a very fractured landscape. It takes significant 
amounts of time for an operator to deploy "one more service".

So, I spent a while deploying Trove, got it 90% working, then discovered Trove 
didn't work with RadosGW. RadosGW was a done deal long ago, and couldn't be 
re-evaluated at that point. (Plus you can't have more than one swift endpoint in 
a cloud...). So, for now, I'm supporting a 90% functional Trove.

If I went and installed Ironic tomorrow, would it work with the radosgw I 
already have? I have no idea. The claim "it supports swift" implies an answer but 
doesn't give one. If I want to consider deploying it now, I have to block out even 
more time to experiment, and then do a bunch of manual testing to verify.

This kind of thing makes it even harder on operators to deploy new services.

Yes, it could be solved at the Ceph level, where they deploy a complete 
OpenStack with all the advanced services and test everything, but OpenStack is 
already doing that. It is significantly easier for OpenStack to test it than for 
Ceph to.

Thanks,
Kevin





From: Ben Swartzlander [b...@swartzlander.org]
Sent: Thursday, August 04, 2016 12:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

On 08/04/2016 03:02 PM, Fox, Kevin M wrote:
> Nope. The incompatibility was for things that never were in radosgw, not 
> things that regressed over time. tmpurls differences and the namespacing 
> things were there since the beginning first introduced.
>
> At the last summit, I started with the DefCore folks and worked backwards 
> until someone said, no we won't ever add tests for compatibility for that 
> because radosgw is not an OpenStack project and we only test OpenStack.
>
> Yes, I think thats a terrible thing. I'm just relaying the message I got.

I don't see how this is terrible at all. If someone were to start up a
clone of another OpenStack project (say, Cinder) which aimed for 100%
API compatibility with Cinder, but outside the tent, and then they
somehow failed to achieve true compatibility because of Cinder's
undocumented details, nobody would proclaim that this was somehow
our (the OpenStack community's) fault.

I think the Radosgw people probably have a legitimate beef with the
Swift team about the lack of an official API spec that they can code to,
but that's a choice for the Swift community to make. If users of Swift
are satisfied with a the-code-is-the-spec stance then I say good luck to
them.

If the user community cares enough about interoperability between
swift-like things they will demand an API spec and conformance tests and
someone will write those and then radosgw will have something to conform
to. None of this has anything to do with the governance model for Ceph
though.

-Ben Swartzlander



> Thanks,
> Kevin
> 
> From: Ben Swartzlander [b...@swartzlander.org]
> Sent: Thursday, August 04, 2016 10:21 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [tc] persistently single-vendor projects
>
> On 08/04/2016 11:57 AM, Fox, Kevin M wrote:
>> Ok. I'll play devils advocate here and speak to the other side of this, 
>> because you raised an interesting issue...
>>
>> Ceph is outside of the tent. It provides a (mostly) api compatible 
>> implementation of the swift api (radosgw), and it is commonly used in 
>> OpenStack deployments.
>>
>> Other OpenStack projects don't take it into account because its not a big 
>> tent thing, even though it is very common. Because of some rules about only 
>> testing OpenStack things, radosgw is not tested against even though it is so 
>> common.
>
I call BS on this assertion. We test things that are outside the tent in the
> upstream gate all the time -- the only requirement is that they be
> released. We won't test against unreleased stuff that's outside the big
> tent and the reason for that should be obvious.
>
>> This causes odd breakages at times that could easily be prevented, but for 
>> procedural things around the Big Tent.
>
> The only way I can see for "odd breakages" to sneak in is on the Ceph
> side, if they aren't testing their changes against OpenStack and they
> introduce a regression, then that's their fault (assuming of course that
> we have good test coverage running against the latest stable release of
> Ceph). It's reasonable to request that we increase our test coverage
> with Ceph if it's not good enough and if we are the ones causing the
> breakages. But their outside status isn't the problem.
>
> -Ben Swartzlander
>
>
>> I do think this should be fixed before we advocate single vendor projects 
>> exit the big tent after some time. As the testing situation may be made 
>> worse.
>>
>> Thanks,
>> Kevin
>> 
>> From: Thierry Carrez 

Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Brian Haley

On 08/04/2016 03:25 PM, Brian Haley wrote:

On 08/04/2016 03:16 PM, Rick Jones wrote:

On 08/04/2016 12:04 PM, Kevin Benton wrote:

Yeah, I wasn't thinking when I +2'ed that. There are two use cases for
the pinger, one for ensuring continuous connectivity and one for
eventual connectivity.

I think the revert is okay for a quick fix, but we really need a new
argument to the pinger for strictness to decide which behavior the test
is looking for.


What situation requires continuous connectivity?


Maybe the test names can answer that:

test_assert_pings_during_br_int_setup_not_lost()
_test_assert_pings_during_br_phys_setup_not_lost()

In other cases we want the previous behavior - is that IP alive?  It's probably
just best to put the old code back and make a new assert_async_ping() based on
this code.


https://review.openstack.org/351356

^^ that makes a new assert_async_ping() and restores assert_ping() to previous 
behavior.
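
As a rough illustration of the difference (a sketch only, not the actual neutron
helpers; it assumes Linux iputils ping, which exits 0 as long as at least one
probe in the invocation is answered):

import subprocess

def assert_ping(dst_ip, timeout=1, count=3):
    # "Is that IP alive?" check: one ping invocation; a transient drop
    # (e.g. while ARP resolves) doesn't fail it as long as some reply
    # comes back within the count.
    subprocess.check_call(
        ['ping', '-c', str(count), '-W', str(timeout), dst_ip])

def assert_async_ping(dst_ip, timeout=1, count=3):
    # Continuous-connectivity check: every single-packet probe must be
    # answered, so losing even one ping during e.g. br-int setup fails.
    for _ in range(count):
        subprocess.check_call(
            ['ping', '-c', '1', '-W', str(timeout), dst_ip])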


It will need a few rechecks to verify it helps, although this error would be 
hard to trigger.


-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Abandoning specs without recent updates

2016-08-04 Thread Jay Faulkner
Hi all,

I'd like to abandon any ironic-specs reviews that haven't had any updates in 6 
months or more. This works out to about 27 patches. The primary reason for this 
is to get items out of the review queue that are old and stale.

I'll be performing this action next week unless there are objections posted here.

Thanks,
Jay Faulkner
OSIC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Ben Swartzlander

On 08/04/2016 03:02 PM, Fox, Kevin M wrote:

Nope. The incompatibility was for things that never were in radosgw, not things 
that regressed over time. The tempurl differences and the namespacing things were 
there from the beginning, when they were first introduced.

At the last summit, I started with the DefCore folks and worked backwards until 
someone said, no we won't ever add tests for compatibility for that because 
radosgw is not an OpenStack project and we only test OpenStack.

Yes, I think that's a terrible thing. I'm just relaying the message I got.


I don't see how this is terrible at all. If someone were to start up a 
clone of another OpenStack project (say, Cinder) which aimed for 100% 
API compatibility with Cinder, but outside the tent, and then they 
somehow failed to achieve true compatibility because of Cinder's 
undocumented details, nobody would proclaim that this was somehow 
our (the OpenStack community's) fault.


I think the Radosgw people probably have a legitimate beef with the 
Swift team about the lack of an official API spec that they can code to, 
but that's a choice for the Swift community to make. If users of Swift 
are satisfied with a the-code-is-the-spec stance then I say good luck to 
them.


If the user community cares enough about interoperability between 
swift-like things they will demand an API spec and conformance tests and 
someone will write those and then radosgw will have something to conform 
to. None of this has anything to do with the governance model for Ceph 
though.


-Ben Swartzlander




Thanks,
Kevin

From: Ben Swartzlander [b...@swartzlander.org]
Sent: Thursday, August 04, 2016 10:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

On 08/04/2016 11:57 AM, Fox, Kevin M wrote:

Ok. I'll play devil's advocate here and speak to the other side of this, because 
you raised an interesting issue...

Ceph is outside of the tent. It provides a (mostly) api compatible 
implementation of the swift api (radosgw), and it is commonly used in OpenStack 
deployments.

Other OpenStack projects don't take it into account because it's not a big tent 
thing, even though it is very common. Because of some rules about only testing 
OpenStack things, radosgw is not tested against even though it is so common.


I call BS on this assertion. We test things that are outside the tent in the
upstream gate all the time -- the only requirement is that they be
released. We won't test against unreleased stuff that's outside the big
tent and the reason for that should be obvious.


This causes odd breakages at times that could easily be prevented, but for 
procedural things around the Big Tent.


The only way I can see for "odd breakages" to sneak in is on the Ceph
side, if they aren't testing their changes against OpenStack and they
introduce a regression, then that's their fault (assuming of course that
we have good test coverage running against the latest stable release of
Ceph). It's reasonable to request that we increase our test coverage
with Ceph if it's not good enough and if we are the ones causing the
breakages. But their outside status isn't the problem.

-Ben Swartzlander



I do think this should be fixed before we advocate that single-vendor projects exit 
the big tent after some time, as the testing situation may be made worse.

Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, August 04, 2016 5:59 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

Thomas Goirand wrote:

On 08/01/2016 09:39 AM, Thierry Carrez wrote:

But if a project is persistently single-vendor after some time and
nobody seems interested to join it, the technical value of that project
being "in" OpenStack rather than a separate project in the OpenStack
ecosystem of projects is limited. It's limited for OpenStack (why
provide resources to support a project that is obviously only beneficial
to one organization ?), and it's limited to the organization itself (why
go through the OpenStack-specific open processes when you could shortcut
it with internal tools and meetings ? why accept the oversight of the
Technical Committee ?).


A project can still be useful for everyone with a single vendor
contributing to it, even after a long period of existence. IMO that's
not the issue we're trying to solve.


I agree with that -- open source projects can be useful for everyone
even if only a single vendor contributes to it.

But you seem to imply that the only way an open source project can be
useful is if it's developed as an OpenStack project under the OpenStack
Technical Committee governance. I'm not advocating that these projects
should stop or disappear. I'm just saying that if they are very unlikely
to grow a more diverse affiliation in the future, they derive little
value in 

Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Brian Haley

On 08/04/2016 03:16 PM, Rick Jones wrote:

On 08/04/2016 12:04 PM, Kevin Benton wrote:

Yeah, I wasn't thinking when I +2'ed that. There are two use cases for
the pinger, one for ensuring continuous connectivity and one for
eventual connectivity.

I think the revert is okay for a quick fix, but we really need a new
argument to the pinger for strictness to decide which behavior the test
is looking for.


What situation requires continuous connectivity?


Maybe the test names can answer that:

test_assert_pings_during_br_int_setup_not_lost()
_test_assert_pings_during_br_phys_setup_not_lost()

In other cases we want the previous behavior - is that IP alive?  It's probably 
just best to put the old code back and make a new assert_async_ping() based on 
this code.


-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Fox, Kevin M
I disagree. I see glare as a superset of the needs of the image api and one 
feature I need thats image related was specifically shot down as "the artefact 
api will solve that".

You have all the same needs to version/catalog/store images. They are not more 
special then a versioned/cataloged/stored heat templates, murano apps, tuskar 
workflows, etc. I've heard multiple times, members of the glance team saying  
that once glare is fully mature, they could stub out the v1/v2 glance apis on 
top of glare. What is the benefit to splitting if the end goal is to 
recombine/make one project irrelevant?

This feels like to me, another case of an established, original tent project 
not wanting to deal with something that needs to be dealt with, and instead 
pushing it out to another project with the hope that it just goes away. With 
all the traction non original tent projects have gotten since the big tent was 
established, that might be an accurate conclusion, but really bad for 
users/operators of OpenStack. 

I really would like glance/glare to reconsider this stance. OpenStack 
continuously budding off projects is not a good pattern.

Thanks,
Kevin

From: Erno Kuvaja [ekuv...@redhat.com]
Sent: Thursday, August 04, 2016 10:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

On Thu, Aug 4, 2016 at 5:20 PM, Clint Byrum  wrote:
> Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +:
>>
>> On 04 Aug 2016, at 17:27, Mikhail Fedosin 
>> > wrote:
>> >
>> > Hi all,
>> > > > after 6 months of Glare v1 API development we have decided to continue
>> > our work in a separate project in the "openstack" namespace with its own
>> > core team (me, Kairat Kushaev, Darja Shkhray and the original creator -
>> > Alexander Tivelkov). We want to thank Glance community for their support
>> > during the incubation period, valuable advice and suggestions - this time
>> > was really productive for us. I believe that this step will allow the
>> > Glare project to concentrate on feature development and move forward
>> > faster. Having the independent service also removes inconsistencies
>> > in understanding what Glance project is: it seems that a single project
>> > cannot own two different APIs with partially overlapping functionality. So
>> > with the separation of Glare into a new project, Glance may continue its
>> > work on the OpenStack Images API, while Glare will become the reference
>> > implementation of the new OpenStack Artifacts API.
>> >
>>
>> I would suggest looking at more than just the development process when
>> reflecting on this choice.
>> While it may allow more rapid development, doing on your own will increase
>> costs for end users and operators in areas like packaging, configuration,
>> monitoring, quota … gaining critical mass in production for Glare will
>> be much more difficult if you are not building on the Glance install base.
>
> I have to agree with Tim here. I respect that it's difficult to build on
> top of Glance's API, rather than just start fresh. But, for operators,
> it's more services, more API's to audit, and more complexity. For users,
> they'll now have two ways to upload software to their clouds, which is
> likely to result in a large portion just ignoring Glare even when it
> would be useful for them.
>
> What I'd hoped when Glare and Glance combined, was that there would be
> a single API that could be used for any software upload and listing. Is
> there any kind of retrospective or documentation somewhere that explains
> why that wasn't possible?
>

I was planning to leave this branch on its own, but I have to correct
something here. This split is not introducing a new API, it's moving the
new Artifact API under its own project; there was no shared API in the
first place. Glare was to be its own service already within the Glance
project. Also, the Artifacts API turned out to be fundamentally
incompatible with the Images APIs v1 & v2 due to the totally different
requirements. And even though the option was discussed in the community, I
personally think replicating the Images API and carrying the cost of it
being in two services that are fundamentally different would have been a
huge mistake we would have paid for over a long time. I'm not saying that it
would have been impossible, but there is a lot of burden in the Images APIs
that Glare really does not need to carry; we just can't get rid of it, and
likely no-one would have been happy to see an Images API v3 around the
time when we are working super hard to get the v1 users moving to v2.

Packaging glance-api, glance-registry and glare-api from the glance repo
would not change the effort too much compared to two repos either.
Likely it just makes it easier when the logical split is clear from
the beginning.

What comes to Tim's statement, I do 

Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Rick Jones

On 08/04/2016 12:04 PM, Kevin Benton wrote:

Yeah, I wasn't thinking when I +2'ed that. There are two use cases for
the pinger, one for ensuring continuous connectivity and one for
eventual connectivity.

I think the revert is okay for a quick fix, but we really need a new
argument to the pinger for strictness to decide which behavior the test
is looking for.


What situation requires continuous connectivity?

rick jones


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][glance] glance_store 0.16.0 release (newton)

2016-08-04 Thread no-reply
We are jubilant to announce the release of:

glance_store 0.16.0: OpenStack Image Service Store Library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/glance_store

With package available at:

https://pypi.python.org/pypi/glance_store

Please report issues through launchpad:

http://bugs.launchpad.net/glance-store

For more details, please see below.

0.16.0
^^^^^^

glance_store._drivers.s3 removed from tree.


Upgrade Notes
*************

* The S3 driver has been removed completely from the glance_store
  source tree. All environments running and/or using this S3 driver
  code that have not been migrated will stop working after the
  upgrade. We recommend you use a different storage backend that is
  still being supported by Glance. The standard deprecation path has
  been used to remove this. The process requiring store driver
  maintainers was initiated at http://lists.openstack.org/pipermail
  /openstack-dev/2015-December/081966.html . Since the S3 driver did not
  get any maintainer, it was decided to remove it.
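
For operators still pointing at the removed driver, the change amounts to
something like the following in glance-api.conf (a sketch only; the option
names come from glance_store, but the values shown are placeholders, not a
recommendation for any particular backend):

[glance_store]
# Before (stops working after this release):
#   stores = s3,http
#   default_store = s3
# After, pointing at a backend glance_store still supports:
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/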

Changes in glance_store 0.15.0..0.16.0
--

ad2c5ef Updated from global requirements
3fe4e00 Updated from global requirements
4432e60 Remove S3 driver
0528e7d Sheepdog:modify default addr
830211c Don't include openstack/common in flake8 exclude list
57cea8d Split functional tests apart


Diffstat (except docs and test files)
-

glance_store/_drivers/s3.py| 832 -
glance_store/_drivers/sheepdog.py  |   2 +-
glance_store/backend.py|   2 +-
glance_store/location.py   |   2 -
.../filesystem/test_functional_filesystem.py   |  44 ++
.../functional/swift/test_functional_swift.py  |  92 +++
.../notes/remove-s3-driver-f432afa1f53ecdf8.yaml   |  15 +
requirements.txt   |   2 +-
setup.cfg  |   8 +-
tox.ini|  15 +-
18 files changed, 173 insertions(+), 1665 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 93ed69a..e4f1b3f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.15.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Kevin Benton
Yeah, I wasn't thinking when I +2'ed that. There are two use cases for the
pinger, one for ensuring continuous connectivity and one for eventual
connectivity.

I think the revert is okay for a quick fix, but we really need a new
argument to the pinger for strictness to decide which behavior the test is
looking for.

On Aug 4, 2016 11:49 AM, "Brian Haley"  wrote:

> On 08/04/2016 01:36 PM, Armando M. wrote:
>
>> Hi Neutrinos,
>>
>> I have noticed that Liberty seems to be belly up [1]. I wonder if anyone
>> knows
>> anything or has the time to look into it.
>>
>> Many thanks,
>> Armando
>>
>> [1] https://review.openstack.org/#/c/349039/
>>
>
> This could be due to this backport;
>
> https://review.openstack.org/#/c/347062/
>
> Before we were doing 'ping -c 3 -W 1 $IP', which will succeed as long as
> one packet is returned.
>
> Now there is an outer loop that runs 'ping -c 1 -W 1 $IP', so a single
> dropped packet could cause an error.  Since sometimes that first packet
> causes ARP to happen, any delay near the 1-second mark looks like a lost
> packet, but is really just transient and packets 2 and 3 are fine.
>
> I've started a revert and will recheck, but if I'm right an async issue
> like this is hard to reliably reproduce - I had to use iptables directly to
> test my theory about the return code from ping when 1/3 packets were lost.
>
> -Brian
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Erno Kuvaja
On Thu, Aug 4, 2016 at 7:13 PM, Tim Bell  wrote:
>
>> On 04 Aug 2016, at 19:34, Erno Kuvaja  wrote:
>>
>> On Thu, Aug 4, 2016 at 5:20 PM, Clint Byrum  wrote:
>>> Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +:

 On 04 Aug 2016, at 17:27, Mikhail Fedosin 
 > wrote:
>
> Hi all,
>>> after 6 months of Glare v1 API development we have decided to continue
> our work in a separate project in the "openstack" namespace with its own
> core team (me, Kairat Kushaev, Darja Shkhray and the original creator -
> Alexander Tivelkov). We want to thank Glance community for their support
> during the incubation period, valuable advice and suggestions - this time
> was really productive for us. I believe that this step will allow the
> Glare project to concentrate on feature development and move forward
> faster. Having the independent service also removes inconsistencies
> in understanding what Glance project is: it seems that a single project
> cannot own two different APIs with partially overlapping functionality. So
> with the separation of Glare into a new project, Glance may continue its
> work on the OpenStack Images API, while Glare will become the reference
> implementation of the new OpenStack Artifacts API.
>

 I would suggest looking at more than just the development process when
 reflecting on this choice.
 While it may allow more rapid development, doing on your own will increase
 costs for end users and operators in areas like packaging, configuration,
 monitoring, quota … gaining critical mass in production for Glare will
 be much more difficult if you are not building on the Glance install base.
>>>
>>> I have to agree with Tim here. I respect that it's difficult to build on
>>> top of Glance's API, rather than just start fresh. But, for operators,
>>> it's more services, more API's to audit, and more complexity. For users,
>>> they'll now have two ways to upload software to their clouds, which is
>>> likely to result in a large portion just ignoring Glare even when it
>>> would be useful for them.
>>>
>>> What I'd hoped when Glare and Glance combined, was that there would be
>>> a single API that could be used for any software upload and listing. Is
>>> there any kind of retrospective or documentation somewhere that explains
>>> why that wasn't possible?
>>>
>>
>> I was planning to leave this branch on it's own, but I have to correct
>> something here. This split is not introducing new API, it's moving the
>> new Artifact API under it's own project, there was no shared API in
>> first place. Glare was to be it's own service already within Glance
>> project. Also the Artifacts API turned out to be fundamentally
>> incompatible with the Images APIs v1 & v2 due to the totally different
>> requirements. And even the option was discussed in the community I
>> personally think replicating Images API and carrying the cost it being
>> in two services that are fundamentally different would have been huge
>> mistake we would have paid for long time. I'm not saying that it would
>> have been impossible, but there is lots of burden in Images APIs that
>> Glare really does not need to carry, we just can't get rid of it and
>> likely no-one would have been happy to see Images API v3 around the
>> time when we are working super hard to get the v1 users moving to v2.
>>
>> Packaging glance-api, glance-registry and glare-api from glance repo
>> would not change the effort too much compared from 2 repos either.
>> Likely it just makes it easier when the logical split it clear from
>> the beginning.
>>
>> What comes to Tim's statement, I do not see how Glare in it's own
>> service with it's own API could ride on the Glance install base apart
>> from the quite false mental image these two thing being the same and
>> based on the same code.
>>
>
> To give a concrete use case, CERN have Glance deployed for images.  We are 
> interested in the ecosystem
> around Murano and are actively using Heat.  We deploy using RDO with RPM 
> packages, Puppet-OpenStack
> for configuration, a set of machines serving Glance in an HA set up across 
> multiple data centres  and various open source monitoring tools.
>
> The multitude of projects and the day two maintenance scenarios with 11 
> independent projects is a cost and adding further to this cost for the 
> production deployments of OpenStack should not be ignored.
>
> By Glare choosing to go their own way, does this mean that

Let me give you concrete answers. Where the answer would differ if Glare
stayed as a Glance service, I will put that answer in parentheses; otherwise
the answer applies to both cases.
>
> - Can the existing RPM packaging for Glance be used to deploy Glare ? If 
> there needs to be new packages defined, this is additional cost for the RDO 
> team (and the equivalent 

Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Fox, Kevin M
Nope. The incompatibility was for things that never were in radosgw, not things 
that regressed over time. The tempurl differences and the namespacing things were 
there from the beginning, when they were first introduced.

At the last summit, I started with the DefCore folks and worked backwards until 
someone said, no we won't ever add tests for compatibility for that because 
radosgw is not an OpenStack project and we only test OpenStack.

Yes, I think that's a terrible thing. I'm just relaying the message I got.

Thanks,
Kevin

From: Ben Swartzlander [b...@swartzlander.org]
Sent: Thursday, August 04, 2016 10:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

On 08/04/2016 11:57 AM, Fox, Kevin M wrote:
> Ok. I'll play devils advocate here and speak to the other side of this, 
> because you raised an interesting issue...
>
> Ceph is outside of the tent. It provides a (mostly) api compatible 
> implementation of the swift api (radosgw), and it is commonly used in 
> OpenStack deployments.
>
> Other OpenStack projects don't take it into account because its not a big 
> tent thing, even though it is very common. Because of some rules about only 
> testing OpenStack things, radosgw is not tested against even though it is so 
> common.

I call BS on this assertion. We test things that are outside the tent in the
upstream gate all the time -- the only requirement is that they be
released. We won't test against unreleased stuff that's outside the big
tent and the reason for that should be obvious.

> This causes odd breakages at times that could easily be prevented, but for 
> procedural things around the Big Tent.

The only way I can see for "odd breakages" to sneak in is on the Ceph
side, if they aren't testing their changes against OpenStack and they
introduce a regression, then that's their fault (assuming of course that
we have good test coverage running against the latest stable release of
Ceph). It's reasonable to request that we increase our test coverage
with Ceph if it's not good enough and if we are the ones causing the
breakages. But their outside status isn't the problem.

-Ben Swartzlander


> I do think this should be fixed before we advocate single vendor projects 
> exit the big tent after some time. As the testing situation may be made worse.
>
> Thanks,
> Kevin
> 
> From: Thierry Carrez [thie...@openstack.org]
> Sent: Thursday, August 04, 2016 5:59 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] persistently single-vendor projects
>
> Thomas Goirand wrote:
>> On 08/01/2016 09:39 AM, Thierry Carrez wrote:
>>> But if a project is persistently single-vendor after some time and
>>> nobody seems interested to join it, the technical value of that project
>>> being "in" OpenStack rather than a separate project in the OpenStack
>>> ecosystem of projects is limited. It's limited for OpenStack (why
>>> provide resources to support a project that is obviously only beneficial
>>> to one organization ?), and it's limited to the organization itself (why
>>> go through the OpenStack-specific open processes when you could shortcut
>>> it with internal tools and meetings ? why accept the oversight of the
>>> Technical Committee ?).
>>
>> A project can still be useful for everyone with a single vendor
>> contributing to it, even after a long period of existence. IMO that's
>> not the issue we're trying to solve.
>
> I agree with that -- open source projects can be useful for everyone
> even if only a single vendor contributes to it.
>
> But you seem to imply that the only way an open source project can be
> useful is if it's developed as an OpenStack project under the OpenStack
> Technical Committee governance. I'm not advocating that these projects
> should stop or disappear. I'm just saying that if they are very unlikely
> to grow a more diverse affiliation in the future, they derive little
> value in being developed under the OpenStack Technical Committee
> oversight, and would probably be equally useful if developed outside of
> OpenStack official projects governance. There are plenty of projects
> that are useful to OpenStack that are not developed under the TC
> governance (libvirt, Ceph, OpenvSwitch...)
>
> What is the point for a project to submit themselves to the oversight of
> a multi-organization Technical Committee if they always will be the
> result of the efforts of a single organization ?
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not 

Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Armando M.
On 4 August 2016 at 11:45, Brian Haley  wrote:

> On 08/04/2016 01:36 PM, Armando M. wrote:
>
>> Hi Neutrinos,
>>
>> I have noticed that Liberty seems to be belly up [1]. I wonder if anyone
>> knows
>> anything or has the time to look into it.
>>
>> Many thanks,
>> Armando
>>
>> [1] https://review.openstack.org/#/c/349039/
>>
>
> This could be due to this backport;
>
> https://review.openstack.org/#/c/347062/
>
> Before we were doing 'ping -c 3 -W 1 $IP', which will succeed as long as
> one packet is returned.
>
> Now there is an outer loop that runs 'ping -c 1 -W 1 $IP', so a single
> dropped packet could cause an error.  Since sometimes that first packet
> causes ARP to happen, any delay near the 1-second mark looks like a lost
> packet, but is really just transient and packets 2 and 3 are fine.
>
> I've started a revert and will recheck, but if I'm right an async issue
> like this is hard to reliably reproduce - I had to use iptables directly to
> test my theory about the return code from ping when 1/3 packets were lost.
>
Thanks Brian,

I'll eagerly await results!


> -Brian
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Ian Cordasco
 

-Original Message-
From: Fox, Kevin M 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: August 4, 2016 at 13:40:53
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [tc] persistently single-vendor projects

> Sorry, I was a bit unclear here. I meant the radosgw in particular. I've seen 
> multiple  
> OpenStack projects fail to integrate with it.
>  
> The most recent example I can think of is Trove can't do database backups to 
> it as the namespacing  
> is slightly different in the older radosgw versions. (I think this is made 
> more uniform  
> in Jewel, but I haven't tested it). I know tempurl's work slightly 
> differently too so  
> may affect services that work with them.
>  
> I don't think the differences are really intentional, more that there isn't 
> an official  
> swift api spec and no cross testing is done because the OpenStack api test 
> suite doesn't  
> run against radosgw as its not in the tent.

And yet there's nothing stopping the radosgw developers (who are apparently 
aiming for Swift compatibility) from running those tests themselves in their 
testing infrastructure. The tests are open source even if there's no written 
specification for the API.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Ian Cordasco
 

-Original Message-
From: Tim Bell 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: August 4, 2016 at 13:19:02
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

>  
> > On 04 Aug 2016, at 19:34, Erno Kuvaja wrote:
> >
> > On Thu, Aug 4, 2016 at 5:20 PM, Clint Byrum wrote:
> >> Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +:
> >>>
> >>> On 04 Aug 2016, at 17:27, Mikhail Fedosin >  
> wrote:
> 
>  Hi all,
> >> after 6 months of Glare v1 API development we have decided to continue
>  our work in a separate project in the "openstack" namespace with its own
>  core team (me, Kairat Kushaev, Darja Shkhray and the original creator -
>  Alexander Tivelkov). We want to thank Glance community for their support
>  during the incubation period, valuable advice and suggestions - this time
>  was really productive for us. I believe that this step will allow the
>  Glare project to concentrate on feature development and move forward
>  faster. Having the independent service also removes inconsistencies
>  in understanding what Glance project is: it seems that a single project
>  cannot own two different APIs with partially overlapping functionality. 
>  So
>  with the separation of Glare into a new project, Glance may continue its
>  work on the OpenStack Images API, while Glare will become the reference
>  implementation of the new OpenStack Artifacts API.
> 
> >>>
> >>> I would suggest looking at more than just the development process when
> >>> reflecting on this choice.
> >>> While it may allow more rapid development, doing on your own will increase
> >>> costs for end users and operators in areas like packaging, configuration,
> >>> monitoring, quota … gaining critical mass in production for Glare will
> >>> be much more difficult if you are not building on the Glance install base.
> >>
> >> I have to agree with Tim here. I respect that it's difficult to build on
> >> top of Glance's API, rather than just start fresh. But, for operators,
> >> it's more services, more API's to audit, and more complexity. For users,
> >> they'll now have two ways to upload software to their clouds, which is
> >> likely to result in a large portion just ignoring Glare even when it
> >> would be useful for them.
> >>
> >> What I'd hoped when Glare and Glance combined, was that there would be
> >> a single API that could be used for any software upload and listing. Is
> >> there any kind of retrospective or documentation somewhere that explains
> >> why that wasn't possible?
> >>
> >
> > I was planning to leave this branch on it's own, but I have to correct
> > something here. This split is not introducing new API, it's moving the
> > new Artifact API under it's own project, there was no shared API in
> > first place. Glare was to be it's own service already within Glance
> > project. Also the Artifacts API turned out to be fundamentally
> > incompatible with the Images APIs v1 & v2 due to the totally different
> > requirements. And even the option was discussed in the community I
> > personally think replicating Images API and carrying the cost it being
> > in two services that are fundamentally different would have been huge
> > mistake we would have paid for long time. I'm not saying that it would
> > have been impossible, but there is lots of burden in Images APIs that
> > Glare really does not need to carry, we just can't get rid of it and
> > likely no-one would have been happy to see Images API v3 around the
> > time when we are working super hard to get the v1 users moving to v2.
> >
> > Packaging glance-api, glance-registry and glare-api from glance repo
> > would not change the effort too much compared from 2 repos either.
> > Likely it just makes it easier when the logical split it clear from
> > the beginning.
> >
> > What comes to Tim's statement, I do not see how Glare in it's own
> > service with it's own API could ride on the Glance install base apart
> > from the quite false mental image these two thing being the same and
> > based on the same code.
> >
>  
> To give a concrete use case, CERN have Glance deployed for images. We are 
> interested in  
> the ecosystem
> around Murano and are actively using Heat. We deploy using RDO with RPM 
> packages, Puppet-OpenStack  
> for configuration, a set of machines serving Glance in an HA set up across 
> multiple data  
> centres and various open source monitoring tools.
>  
> The multitude of projects and the day two maintenance scenarios with 11 
> independent  
> projects is a cost and adding further to this cost for the production 
> deployments of OpenStack  
> should not be ignored.
>  
> By Glare choosing to go their own way, does this mean that

Re: [openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Brian Haley

On 08/04/2016 01:36 PM, Armando M. wrote:

Hi Neutrinos,

I have noticed that Liberty seems to be belly up [1]. I wonder if anyone knows
anything or has the time to look into it.

Many thanks,
Armando

[1] https://review.openstack.org/#/c/349039/


This could be due to this backport;

https://review.openstack.org/#/c/347062/

Before we were doing 'ping -c 3 -W 1 $IP', which will succeed as long as one 
packet is returned.


Now there is an outer loop that runs 'ping -c 1 -W 1 $IP', so a single dropped 
packet could cause an error.  Since sometimes that first packet causes ARP to 
happen, any delay near the 1-second mark looks like a lost packet, but is really 
just transient and packets 2 and 3 are fine.


I've started a revert and will recheck, but if I'm right an async issue like 
this is hard to reliably reproduce - I had to use iptables directly to test my 
theory about the return code from ping when 1/3 packets were lost.


-Brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Fox, Kevin M
Sorry, I was a bit unclear here. I meant the radosgw in particular. I've seen 
multiple OpenStack projects fail to integrate with it.

The most recent example I can think of is that Trove can't do database backups to it 
as the namespacing is slightly different in the older radosgw versions. (I 
think this is made more uniform in Jewel, but I haven't tested it.) I know 
tempurls work slightly differently too, so that may affect services that work with 
them.

I don't think the differences are really intentional, more that there isn't an 
official swift api spec and no cross testing is done because the OpenStack api 
test suite doesn't run against radosgw, as it's not in the tent.

Thanks,
Kevin

From: Erno Kuvaja [ekuv...@redhat.com]
Sent: Thursday, August 04, 2016 9:52 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

Kevin,

What do you mean by "Other OpenStack projects don't take it into
account because its not a big tent thing"? I think there is pretty
decent adoption of Ceph across the projects where it would make sense.
Also I doubt any of them would be against 3rd party Ceph gates on
those projects if the Ceph community felt that the testing is not
sufficient. We have, for example, Cinder being a brilliant example of
demanding that driver providers provide CI for their backends.

Would such demand for 3rd party CI across OpenStack rather than just
Cinder answer your concerns of the testing and how far we are willing
to take that?

- Erno

On Thu, Aug 4, 2016 at 4:57 PM, Fox, Kevin M  wrote:
> Ok. I'll play devils advocate here and speak to the other side of this, 
> because you raised an interesting issue...
>
> Ceph is outside of the tent. It provides a (mostly) api compatible 
> implementation of the swift api (radosgw), and it is commonly used in 
> OpenStack deployments.
>
> Other OpenStack projects don't take it into account because its not a big 
> tent thing, even though it is very common. Because of some rules about only 
> testing OpenStack things, radosgw is not tested against even though it is so 
> common. This causes odd breakages at times that could easily be prevented, 
> but for procedural things around the Big Tent.
>
> I do think this should be fixed before we advocate single vendor projects 
> exit the big tent after some time. As the testing situation may be made worse.
>
> Thanks,
> Kevin
> 
> From: Thierry Carrez [thie...@openstack.org]
> Sent: Thursday, August 04, 2016 5:59 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] persistently single-vendor projects
>
> Thomas Goirand wrote:
>> On 08/01/2016 09:39 AM, Thierry Carrez wrote:
>>> But if a project is persistently single-vendor after some time and
>>> nobody seems interested to join it, the technical value of that project
>>> being "in" OpenStack rather than a separate project in the OpenStack
>>> ecosystem of projects is limited. It's limited for OpenStack (why
>>> provide resources to support a project that is obviously only beneficial
>>> to one organization ?), and it's limited to the organization itself (why
>>> go through the OpenStack-specific open processes when you could shortcut
>>> it with internal tools and meetings ? why accept the oversight of the
>>> Technical Committee ?).
>>
>> A project can still be useful for everyone with a single vendor
>> contributing to it, even after a long period of existence. IMO that's
>> not the issue we're trying to solve.
>
> I agree with that -- open source projects can be useful for everyone
> even if only a single vendor contributes to it.
>
> But you seem to imply that the only way an open source project can be
> useful is if it's developed as an OpenStack project under the OpenStack
> Technical Committee governance. I'm not advocating that these projects
> should stop or disappear. I'm just saying that if they are very unlikely
> to grow a more diverse affiliation in the future, they derive little
> value in being developed under the OpenStack Technical Committee
> oversight, and would probably be equally useful if developed outside of
> OpenStack official projects governance. There are plenty of projects
> that are useful to OpenStack that are not developed under the TC
> governance (libvirt, Ceph, OpenvSwitch...)
>
> What is the point for a project to submit themselves to the oversight of
> a multi-organization Technical Committee if they always will be the
> result of the efforts of a single organization ?
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> 

Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-04 Thread Sean Dague
On 08/04/2016 12:47 PM, John Garbutt wrote:
> On 4 August 2016 at 14:18, Andrew Laski  wrote:
>> On Thu, Aug 4, 2016, at 08:20 AM, Sean Dague wrote:
>>> On 08/03/2016 08:54 PM, Andrew Laski wrote:
 I've brought some of these thoughts up a few times in conversations
 where the Nova team is trying to decide if a particular change warrants
 a microversion. I'm sure I've annoyed some people by this point because
 it wasn't germane to those discussions. So I'll lay this out in it's own
 thread.

 I am a fan of microversions. I think they work wonderfully to express
 when a resource representation changes, or when different data is
 required in a request. This allows clients to make the same request
 across multiple clouds and expect the exact same response format,
 assuming those clouds support that particular microversion. I also think
 they work well to express that a new resource is available. However I do
 think think they have some shortcomings in expressing that a resource
 has been removed. But in short I think microversions work great for
 expressing that there have been changes to the structure and format of
 the API.

 I think microversions are being overused as a signal for other types of
 changes in the API because they are the only tool we have available. The
 most recent example is a proposal to allow the revert_resize API call to
 work when a resizing instance ends up in an error state. I consider
 microversions to be problematic for changes like that because we end up
 in one of two situations:

 1. The microversion is a signal that the API now supports this action,
 but users can perform the action at any microversion. What this really
 indicates is that the deployment being queried has upgraded to a certain
 point and has a new capability. The structure and format of the API have
 not changed so an API microversion is the wrong tool here. And the
 expected use of a microversion, in my opinion, is to demarcate that the
 API is now different at this particular point.

 2. The microversion is a signal that the API now supports this action,
 and users are restricted to using it only on or after that microversion.
 In many cases this is an artificial constraint placed just to satisfy
 the expectation that the API does not change before the microversion.
 But the reality is that if the API change was exposed to every
 microversion it does not affect the ability I lauded above of a client
 being able to send the same request and receive the same response from
 disparate clouds. In other words exposing the new action for all
 microversions does not affect the interoperability story of Nova which
 is the real use case for microversions. I do recognize that the
 situation may be more nuanced and constraining the action to specific
 microversions may be necessary, but that's not always true.

 In case 1 above I think we could find a better way to do this. And I
 don't think we should do case 2, though there may be special cases that
 warrant it.

 As possible alternate signalling methods I would like to propose the
 following for consideration:

 Exposing capabilities that a user is allowed to use. This has been
 discussed before and there is general agreement that this is something
 we would like in Nova. Capabilities will programatically inform users
 that a new action has been added or an existing action can be performed
 in more cases, like revert_resize. With that in place we can avoid the
 ambiguous use of microversions to do that. In the meantime I would like
 the team to consider not using microversions for this case. We have
 enough of them being added that I think for now we could just wait for
 the next microversion after a capability is added and document the new
 capability there.
>>>
>>> The problem with this approach is that the capability add isn't on a
>>> microversion boundary, as long as we continue to believe that we want to
>>> support CD deployments this means people can deploy code with the
>>> behavior change that's not documented or signaled in any way.
> 
> +1
> 
> I do wonder if we want to relax our support of CD, to some extent, but
> thats a different thread.
> 
>> The fact that the capability add isn't on a microversion boundary is
>> exactly my point. There's no need for it to be in many cases. But it
>> would only apply for capability adds which don't affect the
>> interoperability of multiple deployments.
>>
>> The signaling would come from the ability to query the capabilities
>> listing. A change in what that listing returns indicates a behavior
>> change.
>>
>> Another reason I like the above mechanism is that it handles differences
>> in policy better as well. As much as we say that two clouds with the
>> 

[openstack-dev] [all] devstack changing to neutron by default RSN

2016-08-04 Thread Sean Dague
One of the cycle goals in newton was to get devstack over to neutron by
default. Neutron is used by 90+% of our users, and nova network is
deprecated, and is not long for this world.

Because devstack is used by developers as well as by our test
infrastructure, the major stumbling block was coming up with a good
working default on a machine with only 1 interface, that doesn't leave
that interface in a totally broken state if you reboot the box (noting
that ovs changes are persistent by default, but brctl ones are not).

We think we've come up with a model that works. It's here -
https://review.openstack.org/#/c/350750/. And while it's surprisingly
short, it took a lot of thinking this one through to get there.

The crux of it is that we trust the value of PUBLIC_INTERFACE in a new
way on the neutron side. It is now unset by default (logic was changed
in n-net to keep things right there), and if set, then we assume you
really want neutron to manage this physical interface.

If not, that's cool. We're automatically creating a bridge interface
(with no physical interfaces in it) and managing that. For single node
testing this works fine. It passes all the tempest tests[1]. The only
thing that's really weird in this setup is that because there is no
physical interface in that bridge, there is no path to the outside world
from guests. That means no package updates on them.

We address that with an iptables masq rule. It's a little cheaty pants,
however of the options we've got, it didn't seem so bad. (Note: if you
have a better option and are willing to get knee deep in solving it,
please do so. More contributors the better.)
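
For reference, the sort of rule involved looks roughly like the sketch
below. The interface name and the floating IP range are assumptions for
illustration, not necessarily the exact rule the patch installs:

  # Hypothetical example: masquerade traffic from the devstack floating
  # range out through the host's physical interface so guests can reach
  # the outside world (e.g. for package updates).
  sudo iptables -t nat -A POSTROUTING -s 172.24.4.0/24 -o eth0 -j MASQUERADE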

It's going to take a bit for docs to all roll over here, but I think we
actually want this out sooner rather than later to find any other edge
cases that it introduces. There will be some bumpiness here. However,
being able to bring up a full neutron with only the 4 passwords
specified in config is quite nice.
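
As a rough illustration of that minimal config (values are placeholders,
and this is a sketch rather than the canonical devstack sample), a
local.conf along these lines is enough:

  [[local|localrc]]
  # The four passwords referred to above.
  ADMIN_PASSWORD=secret
  DATABASE_PASSWORD=secret
  RABBIT_PASSWORD=secret
  SERVICE_PASSWORD=secret
  # Optionally hand a real NIC to neutron; leave unset to get the
  # bridge-plus-masquerade setup described above.
  #PUBLIC_INTERFACE=eth1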

1. actually 5 tests fail for unrelated reasons, which is that tempest
isn't properly excluding tests for services that aren't running because
it makes some assumptions on the gate config. That will be fixed soon.

-Sean

-- 
Sean Dague
http://dague.net




Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Fox, Kevin M
+1. This moves yet more work towards the operators. That should be taken into 
account.

Thanks,
Kevin

From: Tim Bell [tim.b...@cern.ch]
Sent: Thursday, August 04, 2016 8:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project


On 04 Aug 2016, at 17:27, Mikhail Fedosin 
> wrote:

Hi all,
after 6 months of Glare v1 API development we have decided to continue our work 
in a separate project in the "openstack" namespace with its own core team (me, 
Kairat Kushaev, Darja Shkhray and the original creator - Alexander Tivelkov). 
We want to thank Glance community for their support during the incubation 
period, valuable advice and suggestions - this time was really productive for 
us. I believe that this step will allow the Glare project to concentrate on 
feature development and move forward faster. Having the independent service 
also removes inconsistencies in understanding what Glance project is: it seems 
that a single project cannot own two different APIs with partially overlapping 
functionality. So with the separation of Glare into a new project, Glance may 
continue its work on the OpenStack Images API, while Glare will become the 
reference implementation of the new OpenStack Artifacts API.


I would suggest looking at more than just the development process when 
reflecting on this choice.
While it may allow more rapid development, doing it on your own will increase 
costs for end users and operators in areas like packaging, configuration, 
monitoring, quota … gaining critical mass in production for Glare will be much 
more difficult if you are not building on the Glance install base.
Tim
Nevertheless, the Glare team would like to continue to collaborate with the Glance 
team in a new - cross-project - format. We still have lots in common, both in 
code and usage scenarios, so we are looking forward to fruitful work with the 
rest of the Glance team. Those of you who are interested in Glare and the 
future of the Artifacts API are also welcome to join the Glare team: we have a lot 
of really exciting tasks and will always welcome new members.
Meanwhile, even though my focus will be on the new project, I will 
continue to be part of the Glance team, and I am certainly going to contribute to 
Glance, because I am interested in this project and want to help it be 
successful.

We'll have the formal patches pushed to project-config early next week, and the 
appropriate repositories, wiki and launchpad space will be created soon as 
well.  Our regular weekly IRC meeting remains intact: it is 17:30 UTC Mondays 
in #openstack-meeting-alt; it will just become a Glare project meeting instead 
of a Glare sub-team meeting. Please feel free to join!

Best regards,
Mikhail Fedosin

P.S. For those of you who may be curious about the project name: we'll still be 
called "Glare", but since we are on our own now this acronym becomes recursive: 
GLARE now stands for "GLare Artifact REpository" :)


Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-04 Thread Ed Leafe
On Aug 4, 2016, at 12:17 PM, Chris Friesen  wrote:

>> The idea that by specifying a distinct microversion would somehow guarantee
>> an immutable behavior, though, is simply not the case. We discussed this at
>> length at the midcycle regarding the dropping of the nova-network code; once
>> that's dropped, there won't be any way to get that behavior no matter what
>> microversion you specify. It's gone. We signal this with deprecation notices,
>> release notes, etc., and it's up to individuals to move away from using that
>> behavior during this deprecation period. A new microversion will never help
>> anyone who doesn't follow these signals.
> 
> I was unable to attend the midcycle, but that seems to violate the original 
> description of how microversions were supposed to work.  As I recall, the 
> original intent was something like this:
> 
> At time T, we remove an API via microversion X.  We keep the code around to 
> support it when using microversions less than X.
> 
> At some later time T+i, we bump the minimum microversion up to X.  At this 
> point nobody can ever request the older microversions, so we can safely 
> remove the server-side code.
> 
> Have we given up on this?  Or is nova-network a special-case?

Yes, because raising the minimum doesn’t just remove the deprecated thing (in 
this case, nova-network), but also removes the ability to use *all* of the 
older microversions. So in this case, we are creating a new microversion that 
simply signals “nova-network is no longer here”. It doesn’t act like the 
original intent of microversions (preservation of older APIs).

-- Ed Leafe








Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Tim Bell

> On 04 Aug 2016, at 19:34, Erno Kuvaja  wrote:
> 
> On Thu, Aug 4, 2016 at 5:20 PM, Clint Byrum  wrote:
>> Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +:
>>> 
>>> On 04 Aug 2016, at 17:27, Mikhail Fedosin 
>>> > wrote:
 
 Hi all,
>> after 6 months of Glare v1 API development we have decided to continue
 our work in a separate project in the "openstack" namespace with its own
 core team (me, Kairat Kushaev, Darja Shkhray and the original creator -
 Alexander Tivelkov). We want to thank Glance community for their support
 during the incubation period, valuable advice and suggestions - this time
 was really productive for us. I believe that this step will allow the
 Glare project to concentrate on feature development and move forward
 faster. Having the independent service also removes inconsistencies
 in understanding what Glance project is: it seems that a single project
 cannot own two different APIs with partially overlapping functionality. So
 with the separation of Glare into a new project, Glance may continue its
 work on the OpenStack Images API, while Glare will become the reference
 implementation of the new OpenStack Artifacts API.
 
>>> 
>>> I would suggest looking at more than just the development process when
>>> reflecting on this choice.
>>> While it may allow more rapid development, doing on your own will increase
>>> costs for end users and operators in areas like packaging, configuration,
>>> monitoring, quota … gaining critical mass in production for Glare will
>>> be much more difficult if you are not building on the Glance install base.
>> 
>> I have to agree with Tim here. I respect that it's difficult to build on
>> top of Glance's API, rather than just start fresh. But, for operators,
>> it's more services, more API's to audit, and more complexity. For users,
>> they'll now have two ways to upload software to their clouds, which is
>> likely to result in a large portion just ignoring Glare even when it
>> would be useful for them.
>> 
>> What I'd hoped when Glare and Glance combined, was that there would be
>> a single API that could be used for any software upload and listing. Is
>> there any kind of retrospective or documentation somewhere that explains
>> why that wasn't possible?
>> 
> 
> I was planning to leave this branch on its own, but I have to correct
> something here. This split is not introducing a new API; it's moving the
> new Artifact API under its own project - there was no shared API in the
> first place. Glare was to be its own service already within the Glance
> project. Also, the Artifacts API turned out to be fundamentally
> incompatible with the Images APIs v1 & v2 due to the totally different
> requirements. And even though the option was discussed in the community, I
> personally think replicating the Images API and carrying the cost of it
> being in two services that are fundamentally different would have been a
> huge mistake we would have paid for a long time. I'm not saying that it
> would have been impossible, but there is a lot of burden in the Images APIs
> that Glare really does not need to carry - we just can't get rid of it -
> and likely no-one would have been happy to see an Images API v3 around the
> time when we are working super hard to get the v1 users moving to v2.
> 
> Packaging glance-api, glance-registry and glare-api from the glance repo
> would not change the effort too much compared to 2 repos either.
> Likely it just makes it easier when the logical split is clear from
> the beginning.
> 
> As for Tim's statement, I do not see how Glare in its own
> service with its own API could ride on the Glance install base apart
> from the quite false mental image of these two things being the same and
> based on the same code.
> 

To give a concrete use case, CERN have Glance deployed for images.  We are 
interested in the ecosystem
around Murano and are actively using Heat.  We deploy using RDO with RPM 
packages, Puppet-OpenStack
for configuration, a set of machines serving Glance in an HA setup across 
multiple data centres, and various open source monitoring tools.

The multitude of projects and the day-two maintenance scenarios with 11 
independent projects are a cost, and adding further to this cost for 
production deployments of OpenStack should not be ignored.

With Glare choosing to go its own way, does this mean that

- Can the existing RPM packaging for Glance be used to deploy Glare? If new 
packages need to be defined, this is an additional cost for the RDO team (and 
the equivalent .deb teams), or will the Glare team provide this?
- Can we use our existing templates for Glance for configuration management? 
If new ones need to be defined, this is additional work for the Chef and 
Ansible teams, or will the Glare team provide this?
- Log consolidation and parsing using the various OsOps 

[openstack-dev] [all][api] POST /api-wg/news

2016-08-04 Thread michael mccune

Greetings OpenStack community,

This week, the API-WG was visited by Deepti Ramakrishna from the Cinder 
project. Deepti brought up a spec currently being implemented by the 
Cinder team about resource capabilities, "New public core APIs to expose 
system capabilities"[5][6]. The result of this was a great discussion[7] 
about the possibility of creating a new guideline for projects that may 
wish to expose capability specific information in their REST API servers.


I'd like to thank Deepti for bringing this up, and I encourage anyone 
who is interested to have a look at the Cinder spec and hopefully share 
their input about how this might be used by other projects in the 
OpenStack ecosystem.


No new guidelines have been merged or proposed for freeze this week.

# Recently merged guidelines

None

# API guidelines proposed for freeze

The following guidelines are available for broader review by interested 
parties. These will be merged in one week if there is no further feedback.


None this week

# Guidelines currently under review

These are guidelines that the working group are debating and working on 
for consistency and language. We encourage any interested parties to 
join in the conversation.


* Add the beginning of a set of guidelines for URIs
  https://review.openstack.org/#/c/322194/
* Add description of pagination parameters
  https://review.openstack.org/190743

Note that some of these guidelines were introduced quite a long time ago 
and need to either be refreshed by their original authors, or adopted by 
new interested parties. If you're the author of one of these older 
reviews, please come back to it or we'll have to mark it abandoned.


# API Impact reviews currently open

Reviews marked as APIImpact [1] are meant to help inform the working 
group about changes which would benefit from wider inspection by group 
members and liaisons. While the working group will attempt to address 
these reviews whenever possible, it is highly recommended that 
interested parties attend the API-WG meetings [2] to promote 
communication surrounding their reviews.


To learn more about the API WG mission and the work we do, see OpenStack 
API Working Group [3].


Thanks for reading and see you next week!

[1] 
https://review.openstack.org/#/q/status:open+AND+(message:ApiImpact+OR+message:APIImpact),n,z

[2] https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
[3] http://specs.openstack.org/openstack/api-wg/
[4]: https://bugs.launchpad.net/openstack-api-wg
[5]: https://review.openstack.org/#/c/306930/
[6]: https://review.openstack.org/#/c/350310/
[7]: 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-3/%23openstack-meeting-3.2016-08-04.log.html#t2016-08-04T16:07:40




[openstack-dev] [openstack-ansible][security] Adding RHEL 7 STIG to openstack-ansible-security

2016-08-04 Thread Major Hayden

Hey there,

The existing openstack-ansible-security role uses security configurations from 
the Security Technical Implementation Guide (STIG) and the new Red Hat 
Enterprise Linux 7 STIG is due out soon.  The role is currently based on the 
RHEL 6 STIG, and although this works quite well for Ubuntu 14.04, the RHEL 7 
STIG has plenty of improvements that work better with Ubuntu 16.04, CentOS 7 
and RHEL 7.

I'd like to make the following changes around which STIG is applied to each OS:

  * RHEL 6 STIG
- Ubuntu 14.04
  * RHEL 7 STIG
- Ubuntu 16.04
- CentOS 7
- RHEL 7

Challenges
----------

There are a few challenges to rebasing the role on the RHEL 7 STIG:

  * All of the configurations have been renumbered in the new STIG
  * Many of the new configurations have no overlap with the RHEL 6 STIG
  * Users of the role on CentOS 7 / Ubuntu 16.04 will have different 
configurations applied than they did previously
  * The Newton deadline is rapidly approaching

I have a couple of ideas on how to implement this:

Idea #1: Update what exists today
---------------------------------
This would keep the same role structure as it stands right now and it would 
intermingle RHEL 6/7 STIGs in the same tasks.  Some tasks are identical between 
both STIGs, but some are different.  It's nice because it's less of an overall 
change, but it could get messy with lots of 'when' statements all over the 
place.
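
To make that concrete, here is a hedged sketch of the intermingling (the
task, file and values below are invented for illustration and are not
actual STIG content):

  # Hypothetical task file mixing both STIGs in one place.
  - name: Set password minimum age (RHEL 6 STIG wording)
    lineinfile:
      dest: /etc/login.defs
      regexp: '^PASS_MIN_DAYS'
      line: 'PASS_MIN_DAYS 1'
    when: ansible_distribution == 'Ubuntu' and ansible_distribution_version == '14.04'

  - name: Set password minimum age (RHEL 7 STIG wording)
    lineinfile:
      dest: /etc/login.defs
      regexp: '^PASS_MIN_DAYS'
      line: 'PASS_MIN_DAYS 1'
    when: not (ansible_distribution == 'Ubuntu' and ansible_distribution_version == '14.04')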

Idea #2: Put a fork in the road
-------------------------------
This would involve restructuring the role so that there's a big fork in 
main.yml. The structure might look something like this:

  /main.yml
  /rhel6/main.yml
  /rhel6/auth.yml
  /rhel6/audit.yml
  /rhel6/...
  /rhel7/main.yml
  /rhel7/auth.yml
  /rhel7/audit.yml

Note that the 'rhel6' directory would contain RHEL 6 STIG content for Ubuntu 
14.04 while the 'rhel7' directory would contain RHEL 7 content for Ubuntu 
16.04, CentOS 7 and RHEL 7.  The root 'main.yml' would have an include line 
that would check the OS and include the correct main.yml from the 'rhel6' or 
'rhel7' directory.
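
A rough sketch of what that fork could look like (paths and conditions
here are illustrative only, not a final design):

  # tasks/main.yml - pick the STIG content based on the target OS.
  - include: rhel6/main.yml
    when: ansible_distribution == 'Ubuntu' and ansible_distribution_version == '14.04'

  - include: rhel7/main.yml
    when: not (ansible_distribution == 'Ubuntu' and ansible_distribution_version == '14.04')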

This would involve more changes, and possibly a few repeated tasks 
between the two STIGs.  However, it should be cleaner and easier to maintain.  
When support for Ubuntu 14.04 needs to be removed, the 'rhel6' directory could 
be dropped entirely.

Requested feedback
------------------
I'd really like to hear feedback from users, especially those who use this role 
on a regular basis.  Here are my questions:

1) Which plan makes the most sense?
2) Is there another idea that makes more sense than these two?

Thanks in advance for your help!  I plan to put a spec together once I get some 
feedback.

--
Major Hayden



[openstack-dev] [Neutron] stable/liberty busted

2016-08-04 Thread Armando M.
Hi Neutrinos,

I have noticed that Liberty seems to be belly up [1]. I wonder if anyone
knows anything or has the time to look into it.

Many thanks,
Armando

[1] https://review.openstack.org/#/c/349039/


[openstack-dev] [Neutron] Drivers meeting - switching gear

2016-08-04 Thread Armando M.
Folks,

As some of you may be familiar with, the typical agenda for the drivers
meeting [1] involves the members of the drivers team going over triaged
RFEs. Prior to the meeting we typically process confirmed and new RFEs to
see whether some of them are worth triaging (i.e. discussed during the
weekly meeting).

The confirmed and new pipelines are pretty dry [2,3] at the moment, and the
triaged one [4] is pretty stable. For this reason I would like to switch
gear and take the drivers meeting as an opportunity to go over the Newton
backlog [5]. This means that from now until Ocata opens up we will give
priority to reviewing blueprint statuses, and other high visibility
Stadium-wide efforts like neutron-lib.

Cheers,
Armando

[1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
[2]
https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Confirmed=rfe
[3]
https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=New=rfe
[4]
https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged=rfe
[5] https://blueprints.launchpad.net/neutron/newton/+assignments


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Erno Kuvaja
On Thu, Aug 4, 2016 at 5:20 PM, Clint Byrum  wrote:
> Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +:
>>
>> On 04 Aug 2016, at 17:27, Mikhail Fedosin 
>> > wrote:
>> >
>> > Hi all,
>> > > > after 6 months of Glare v1 API development we have decided to continue
>> > our work in a separate project in the "openstack" namespace with its own
>> > core team (me, Kairat Kushaev, Darja Shkhray and the original creator -
>> > Alexander Tivelkov). We want to thank Glance community for their support
>> > during the incubation period, valuable advice and suggestions - this time
>> > was really productive for us. I believe that this step will allow the
>> > Glare project to concentrate on feature development and move forward
>> > faster. Having the independent service also removes inconsistencies
>> > in understanding what Glance project is: it seems that a single project
>> > cannot own two different APIs with partially overlapping functionality. So
>> > with the separation of Glare into a new project, Glance may continue its
>> > work on the OpenStack Images API, while Glare will become the reference
>> > implementation of the new OpenStack Artifacts API.
>> >
>>
>> I would suggest looking at more than just the development process when
>> reflecting on this choice.
>> While it may allow more rapid development, doing on your own will increase
>> costs for end users and operators in areas like packaging, configuration,
>> monitoring, quota … gaining critical mass in production for Glare will
>> be much more difficult if you are not building on the Glance install base.
>
> I have to agree with Tim here. I respect that it's difficult to build on
> top of Glance's API, rather than just start fresh. But, for operators,
> it's more services, more API's to audit, and more complexity. For users,
> they'll now have two ways to upload software to their clouds, which is
> likely to result in a large portion just ignoring Glare even when it
> would be useful for them.
>
> What I'd hoped when Glare and Glance combined, was that there would be
> a single API that could be used for any software upload and listing. Is
> there any kind of retrospective or documentation somewhere that explains
> why that wasn't possible?
>

I was planning to leave this branch on its own, but I have to correct
something here. This split is not introducing a new API; it's moving the
new Artifact API under its own project - there was no shared API in the
first place. Glare was to be its own service already within the Glance
project. Also, the Artifacts API turned out to be fundamentally
incompatible with the Images APIs v1 & v2 due to the totally different
requirements. And even though the option was discussed in the community, I
personally think replicating the Images API and carrying the cost of it
being in two services that are fundamentally different would have been a
huge mistake we would have paid for a long time. I'm not saying that it
would have been impossible, but there is a lot of burden in the Images APIs
that Glare really does not need to carry - we just can't get rid of it -
and likely no-one would have been happy to see an Images API v3 around the
time when we are working super hard to get the v1 users moving to v2.

Packaging glance-api, glance-registry and glare-api from the glance repo
would not change the effort too much compared to 2 repos either.
Likely it just makes it easier when the logical split is clear from
the beginning.

As for Tim's statement, I do not see how Glare in its own
service with its own API could ride on the Glance install base apart
from the quite false mental image of these two things being the same and
based on the same code.

- Erno


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Ben Swartzlander

On 08/04/2016 11:57 AM, Fox, Kevin M wrote:

Ok. I'll play devils advocate here and speak to the other side of this, because 
you raised an interesting issue...

Ceph is outside of the tent. It provides a (mostly) api compatible 
implementation of the swift api (radosgw), and it is commonly used in OpenStack 
deployments.

Other OpenStack projects don't take it into account because its not a big tent 
thing, even though it is very common. Because of some rules about only testing 
OpenStack things, radosgw is not tested against even though it is so common.


I call BS on this assertion. We test things that are outside the tent in the 
upstream gate all the time -- the only requirement is that they be 
released. We won't test against unreleased stuff that's outside the big 
tent, and the reason for that should be obvious.



This causes odd breakages at times that could easily be prevented, but for 
procedural things around the Big Tent.


The only way I can see for "odd breakages" to sneak in is on the Ceph 
side: if they aren't testing their changes against OpenStack and they 
introduce a regression, then that's their fault (assuming of course that 
we have good test coverage running against the latest stable release of 
Ceph). It's reasonable to request that we increase our test coverage 
with Ceph if it's not good enough and if we are the ones causing the 
breakages. But their outside status isn't the problem.


-Ben Swartzlander



I do think this should be fixed before we advocate single vendor projects exit 
the big tent after some time. As the testing situation may be made worse.

Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, August 04, 2016 5:59 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

Thomas Goirand wrote:

On 08/01/2016 09:39 AM, Thierry Carrez wrote:

But if a project is persistently single-vendor after some time and
nobody seems interested to join it, the technical value of that project
being "in" OpenStack rather than a separate project in the OpenStack
ecosystem of projects is limited. It's limited for OpenStack (why
provide resources to support a project that is obviously only beneficial
to one organization ?), and it's limited to the organization itself (why
go through the OpenStack-specific open processes when you could shortcut
it with internal tools and meetings ? why accept the oversight of the
Technical Committee ?).


A project can still be useful for everyone with a single vendor
contributing to it, even after a long period of existence. IMO that's
not the issue we're trying to solve.


I agree with that -- open source projects can be useful for everyone
even if only a single vendor contributes to it.

But you seem to imply that the only way an open source project can be
useful is if it's developed as an OpenStack project under the OpenStack
Technical Committee governance. I'm not advocating that these projects
should stop or disappear. I'm just saying that if they are very unlikely
to grow a more diverse affiliation in the future, they derive little
value in being developed under the OpenStack Technical Committee
oversight, and would probably be equally useful if developed outside of
OpenStack official projects governance. There are plenty of projects
that are useful to OpenStack that are not developed under the TC
governance (libvirt, Ceph, OpenvSwitch...)

What is the point for a project to submit themselves to the oversight of
a multi-organization Technical Committee if they always will be the
result of the efforts of a single organization ?

--
Thierry Carrez (ttx)



Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread John Griffith
On Thu, Aug 4, 2016 at 9:57 AM, Fox, Kevin M  wrote:

> Ok. I'll play devils advocate here and speak to the other side of this,
> because you raised an interesting issue...
>
> Ceph is outside of the tent. It provides a (mostly) api compatible
> implementation of the swift api (radosgw), and it is commonly used in
> OpenStack deployments.
>
> Other OpenStack projects don't take it into account because its not a big
> tent thing, even though it is very common. Because of some rules about only
> testing OpenStack things, radosgw is not tested against even though it is
> so common. This causes odd breakages at times that could easily be
> prevented, but for procedural things around the Big Tent.
>
​I think this statement needs some fact checking.  The reality is that Ceph
is a PERFECT example of a valuable and widely used project in the OpenStack
ecosystem that does not officially reside in the ecosystem.  I can
assure you that Cinder and Nova in particular take it into account, almost to
the point of being detrimental to other storage options.

I suspect part of your view stems from issues prior to Ceph being an active
part of CI, which it now is.  The question of testing isn't the same here
IMO.  In the case of Block storage in particular we have all drivers (none
of which but the reference LVM driver are part of OpenStack governance)
running CI testing.  Granted it's not pretty, but there's nothing keeping
them from implementing CI, running and reporting.  In the case of open
source software based options like Ceph, Gluster, Sheepdog etc... those are
all projects maintained outside of OpenStack governance BUT they all have
Infra resources running CI etc.


>
> I do think this should be fixed before we advocate single vendor projects
> exit the big tent after some time. As the testing situation may be made
> worse.
>
> Thanks,
> Kevin
> 
> From: Thierry Carrez [thie...@openstack.org]
> Sent: Thursday, August 04, 2016 5:59 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] persistently single-vendor projects
>
> Thomas Goirand wrote:
> > On 08/01/2016 09:39 AM, Thierry Carrez wrote:
> >> But if a project is persistently single-vendor after some time and
> >> nobody seems interested to join it, the technical value of that project
> >> being "in" OpenStack rather than a separate project in the OpenStack
> >> ecosystem of projects is limited. It's limited for OpenStack (why
> >> provide resources to support a project that is obviously only beneficial
> >> to one organization ?), and it's limited to the organization itself (why
> >> go through the OpenStack-specific open processes when you could shortcut
> >> it with internal tools and meetings ? why accept the oversight of the
> >> Technical Committee ?).
> >
> > A project can still be useful for everyone with a single vendor
> > contributing to it, even after a long period of existence. IMO that's
> > not the issue we're trying to solve.
>
> I agree with that -- open source projects can be useful for everyone
> even if only a single vendor contributes to it.
>
> But you seem to imply that the only way an open source project can be
> useful is if it's developed as an OpenStack project under the OpenStack
> Technical Committee governance. I'm not advocating that these projects
> should stop or disappear. I'm just saying that if they are very unlikely
> to grow a more diverse affiliation in the future, they derive little
> value in being developed under the OpenStack Technical Committee
> oversight, and would probably be equally useful if developed outside of
> OpenStack official projects governance. There are plenty of projects
> that are useful to OpenStack that are not developed under the TC
> governance (libvirt, Ceph, OpenvSwitch...)
>
> What is the point for a project to submit themselves to the oversight of
> a multi-organization Technical Committee if they always will be the
> result of the efforts of a single organization ?
>
> --
> Thierry Carrez (ttx)
>


Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-04 Thread Chris Friesen

On 08/04/2016 09:28 AM, Edward Leafe wrote:


The idea that by specifying a distinct microversion would somehow guarantee
an immutable behavior, though, is simply not the case. We discussed this at
length at the midcycle regarding the dropping of the nova-network code; once
that's dropped, there won't be any way to get that behavior no matter what
microversion you specify. It's gone. We signal this with deprecation notices,
release notes, etc., and it's up to individuals to move away from using that
behavior during this deprecation period. A new microversion will never help
anyone who doesn't follow these signals.


I was unable to attend the midcycle, but that seems to violate the original 
description of how microversions were supposed to work.  As I recall, the 
original intent was something like this:


At time T, we remove an API via microversion X.  We keep the code around to 
support it when using microversions less than X.


At some later time T+i, we bump the minimum microversion up to X.  At this point 
nobody can ever request the older microversions, so we can safely remove the 
server-side code.


Have we given up on this?  Or is nova-network a special-case?

Chris



Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Erno Kuvaja
On Thu, Aug 4, 2016 at 9:56 AM, Duncan Thomas  wrote:
> On 1 August 2016 at 18:14, Adrian Otto  wrote:
>>
>> I am struggling to understand why we would want to remove projects from
>> our big tent at all, as long as they are being actively developed under the
>> principles of "four opens". It seems to me that working to disqualify such
>> projects sends an alarming signal to our ecosystem. The reason we made the
>> big tent to begin with was to set a tone of inclusion. This whole discussion
>> seems like a step backward. What problem are we trying to solve, exactly?
>
>
> Any project existing in the big tent sets a significant barrier (policy,
> technical, mindshare) of entry to any competing project that might spring
> up. The cost of entry as an individual into a single-vendor project is much
> higher in general than a diverse one (back-channel communications,
> differences in vision, monoculture, commercial pressures, etc), and so
> having a non-diverse project in the big tent reduces the possibilities of a
> better replacement appearing.
>

Actually, I couldn't disagree more. Since the big tent and the stackforge move
under the openstack namespace, the effect has been exactly the
opposite. Competitors have far less need to collaborate with each
other to be part of OpenStack, as anyone can just kick up their own
project and do it their way while still being part of the
community/ecosystem/whatever-you-want-to-call-it.

We see projects splitting more when they do not share the core
concepts (which is a good thing), but we do not see projects combining
their efforts when they do overlapping things. Maybe we will see this
lack of diversity just keep growing as long as we don't care about it (a tag
here, another there, is not going to slow the company behind the
project from pushing it to their customers even if there were more diverse or
better options; it's still part of OpenStack and it's "ours"). If we
start pushing the projects that are single vendor out of the big tent,
we put more pressure on multiple of those to combine their efforts
rather than continue competing for the same thing, and if they don't want
to play together I don't see anything wrong with sending a clear message that
we don't want to share the cost of it.

I personally see the proposal not as limiting competition from
appearing, but rather that single-vendor competition might not stick around
when the competing projects would be under threat of being thrown out.
If someone brings a competing project into the ecosystem, 18 months is
also a pretty decent time to see whether that approach is superior enough (to
attract the diversity) to justify its existence, or whether those people
should just try to play with others instead of doing their own thing.

I'm all for selective inclusion based on meritocracy, not only at the
person level, but at the project level as well.

- Erno



Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread James Bottomley
On Thu, 2016-08-04 at 10:10 +0200, Thierry Carrez wrote:
> Devdatta Kulkarni wrote:
> > As current PTL of one of the projects that has the team:single
> > -vendor tag, I have following thoughts/questions on this issue.
> 
> In preamble I'd like to reiterate that the proposal is not on the 
> table at this stage -- this is just a discussion to see whether it 
> would be a good thing or a bad thing.

I think this is fair enough, plus the idea that the tagging only
triggers review not an automatic eviction is reasonable.  However, I do
have a concern about what you said below.

> > - Is the need for periodically deciding membership in the big tent
> > primarily stemming from the question of managing resources (for the
> > future design summits and cross-project work)?
> 
> No, it's not the primary reason. As I explained elsewhere in the 
> thread, it's more that (from an upstream open source project 
> perspective) OpenStack is not a useful vehicle for open source 
> projects that are and will always be driven by a single vendor. The 
> value we provide (through our governance, principles and infra 
> systems) is in enabling open collaboration between organizations. A 
> project that will clearly always stay single-vendor (for one reason 
> or another) does not get or bring extra technical value by being 
> developed within "OpenStack" (i.e. under the Technical Committee
> oversight).

I don't believe this to be consistent with the OpenStack mission
statement:

to produce the ubiquitous Open Source Cloud Computing platform that
will meet the needs of public and private clouds regardless of size, by
being simple to implement and massively scalable.

OpenStack chooses to implement this mission statement by insisting on
Openness via the four opens and by facilitating a collaborative
environment for every project. I interpret the above to mean any
OpenStack project must be open and must bring technical benefit to
public and private clouds of any size; so I don't think a statement
that even if a project satisfies your openness requirements, the fact
that it must also derive technical benefit from the infrastructure
you've put in place can be supported by any construction of the above
mission statement.

The other thing that really bothers me is that it changes the
assessment of value to OpenStack from being extrinsic (brings technical
benefit to public and private cloud) to being intrinsic (must derive
technical benefit from our infrastructure) and I find non-extrinsic
measures suspect because they can lead to self-perpetuation.

James




Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Erno Kuvaja
Kevin,

What do you mean by "Other OpenStack projects don't take it into
account because its not a big tent thing"? I think there is pretty
decent adoption of Ceph across the projects where it would make sense.
Also I doubt any of them would be against 3rd party Ceph gates for
those projects if the Ceph community felt that the testing is not
sufficient. Cinder, for example, is a brilliant example of
demanding that driver providers provide CI for their backends.

Would such a demand for 3rd party CI across OpenStack, rather than just
Cinder, answer your concerns about the testing and how far we are willing
to take that?

- Erno

On Thu, Aug 4, 2016 at 4:57 PM, Fox, Kevin M  wrote:
> Ok. I'll play devils advocate here and speak to the other side of this, 
> because you raised an interesting issue...
>
> Ceph is outside of the tent. It provides a (mostly) api compatible 
> implementation of the swift api (radosgw), and it is commonly used in 
> OpenStack deployments.
>
> Other OpenStack projects don't take it into account because its not a big 
> tent thing, even though it is very common. Because of some rules about only 
> testing OpenStack things, radosgw is not tested against even though it is so 
> common. This causes odd breakages at times that could easily be prevented, 
> but for procedural things around the Big Tent.
>
> I do think this should be fixed before we advocate single vendor projects 
> exit the big tent after some time. As the testing situation may be made worse.
>
> Thanks,
> Kevin
> 
> From: Thierry Carrez [thie...@openstack.org]
> Sent: Thursday, August 04, 2016 5:59 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] persistently single-vendor projects
>
> Thomas Goirand wrote:
>> On 08/01/2016 09:39 AM, Thierry Carrez wrote:
>>> But if a project is persistently single-vendor after some time and
>>> nobody seems interested to join it, the technical value of that project
>>> being "in" OpenStack rather than a separate project in the OpenStack
>>> ecosystem of projects is limited. It's limited for OpenStack (why
>>> provide resources to support a project that is obviously only beneficial
>>> to one organization ?), and it's limited to the organization itself (why
>>> go through the OpenStack-specific open processes when you could shortcut
>>> it with internal tools and meetings ? why accept the oversight of the
>>> Technical Committee ?).
>>
>> A project can still be useful for everyone with a single vendor
>> contributing to it, even after a long period of existence. IMO that's
>> not the issue we're trying to solve.
>
> I agree with that -- open source projects can be useful for everyone
> even if only a single vendor contributes to it.
>
> But you seem to imply that the only way an open source project can be
> useful is if it's developed as an OpenStack project under the OpenStack
> Technical Committee governance. I'm not advocating that these projects
> should stop or disappear. I'm just saying that if they are very unlikely
> to grow a more diverse affiliation in the future, they derive little
> value in being developed under the OpenStack Technical Committee
> oversight, and would probably be equally useful if developed outside of
> OpenStack official projects governance. There are plenty of projects
> that are useful to OpenStack that are not developed under the TC
> governance (libvirt, Ceph, OpenvSwitch...)
>
> What is the point for a project to submit themselves to the oversight of
> a multi-organization Technical Committee if they always will be the
> result of the efforts of a single organization ?
>
> --
> Thierry Carrez (ttx)
>


Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-04 Thread John Garbutt
On 4 August 2016 at 14:18, Andrew Laski  wrote:
> On Thu, Aug 4, 2016, at 08:20 AM, Sean Dague wrote:
>> On 08/03/2016 08:54 PM, Andrew Laski wrote:
>> > I've brought some of these thoughts up a few times in conversations
>> > where the Nova team is trying to decide if a particular change warrants
>> > a microversion. I'm sure I've annoyed some people by this point because
>> > it wasn't germane to those discussions. So I'll lay this out in its own
>> > thread.
>> >
>> > I am a fan of microversions. I think they work wonderfully to express
>> > when a resource representation changes, or when different data is
>> > required in a request. This allows clients to make the same request
>> > across multiple clouds and expect the exact same response format,
>> > assuming those clouds support that particular microversion. I also think
>> > they work well to express that a new resource is available. However I do
>> > think think they have some shortcomings in expressing that a resource
>> > has been removed. But in short I think microversions work great for
>> > expressing that there have been changes to the structure and format of
>> > the API.
>> >
>> > I think microversions are being overused as a signal for other types of
>> > changes in the API because they are the only tool we have available. The
>> > most recent example is a proposal to allow the revert_resize API call to
>> > work when a resizing instance ends up in an error state. I consider
>> > microversions to be problematic for changes like that because we end up
>> > in one of two situations:
>> >
>> > 1. The microversion is a signal that the API now supports this action,
>> > but users can perform the action at any microversion. What this really
>> > indicates is that the deployment being queried has upgraded to a certain
>> > point and has a new capability. The structure and format of the API have
>> > not changed so an API microversion is the wrong tool here. And the
>> > expected use of a microversion, in my opinion, is to demarcate that the
>> > API is now different at this particular point.
>> >
>> > 2. The microversion is a signal that the API now supports this action,
>> > and users are restricted to using it only on or after that microversion.
>> > In many cases this is an artificial constraint placed just to satisfy
>> > the expectation that the API does not change before the microversion.
>> > But the reality is that if the API change was exposed to every
>> > microversion it does not affect the ability I lauded above of a client
>> > being able to send the same request and receive the same response from
>> > disparate clouds. In other words exposing the new action for all
>> > microversions does not affect the interoperability story of Nova which
>> > is the real use case for microversions. I do recognize that the
>> > situation may be more nuanced and constraining the action to specific
>> > microversions may be necessary, but that's not always true.
>> >
>> > In case 1 above I think we could find a better way to do this. And I
>> > don't think we should do case 2, though there may be special cases that
>> > warrant it.
>> >
>> > As possible alternate signalling methods I would like to propose the
>> > following for consideration:
>> >
>> > Exposing capabilities that a user is allowed to use. This has been
>> > discussed before and there is general agreement that this is something
>> > we would like in Nova. Capabilities will programmatically inform users
>> > that a new action has been added or an existing action can be performed
>> > in more cases, like revert_resize. With that in place we can avoid the
>> > ambiguous use of microversions to do that. In the meantime I would like
>> > the team to consider not using microversions for this case. We have
>> > enough of them being added that I think for now we could just wait for
>> > the next microversion after a capability is added and document the new
>> > capability there.
>>
>> The problem with this approach is that the capability add isn't on a
>> microversion boundary, as long as we continue to believe that we want to
>> support CD deployments this means people can deploy code with the
>> behavior change that's not documented or signaled in any way.

+1

I do wonder if we want to relax our support of CD, to some extent, but
that's a different thread.

> The fact that the capability add isn't on a microversion boundary is
> exactly my point. There's no need for it to be in many cases. But it
> would only apply for capability adds which don't affect the
> interoperability of multiple deployments.
>
> The signaling would come from the ability to query the capabilities
> listing. A change in what that listing returns indicates a behavior
> change.
>
> Another reason I like the above mechanism is that it handles differences
> in policy better as well. As much as we say that two clouds with the
> same microversions available should accept the same requests and return
> the 

Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-04 Thread Andrew Laski


On Thu, Aug 4, 2016, at 11:40 AM, John Garbutt wrote:
> On 4 August 2016 at 16:28, Edward Leafe  wrote:
> > On Aug 4, 2016, at 8:18 AM, Andrew Laski  wrote:
> >
> >> This gets to the point I'm trying to make. We don't guarantee old
> >> behavior in all cases at which point users can no longer rely on
> >> microversions to signal non breaking changes. And where we do guarantee
> >> old behavior sometimes we do it artificially because the only signal we
> >> have is microversions and that's the contract we're trying to adhere to.
> >
> > I've always understood microversions to be a way to prevent breaking an 
> > automated tool when we change either the input or output of our API. Its 
> > benefit was less clear for the case of adding a new API, since there is no 
> > chance of breaking something that would never call it. We also accept that 
> > a bug fix doesn't require a microversion bump, as users should *never* be 
> > expecting a 5xx response, so not only does fixing that not need a bump, but 
> > such fixes can be backported to affect all microversions.
> >
> > The idea that by specifying a distinct microversion would somehow guarantee 
> > an immutable behavior, though, is simply not the case. We discussed this at 
> > length at the midcycle regarding the dropping of the nova-network code; 
> > once that's dropped, there won't be any way to get that behavior no matter 
> > what microversion you specify. It's gone. We signal this with deprecation 
> > notices, release notes, etc., and it's up to individuals to move away from 
> > using that behavior during this deprecation period. A new microversion will 
> > never help anyone who doesn't follow these signals.
> >
> > In the case that triggered this thread [0], the change was completely on 
> > the server side of things; no change to either the request or response of 
> > the API. It simply allowed a failed resize to be recovered more easily. 
> > That's a behavior change, not an API change, and frankly, I can't imagine 
> > anyone who would ever *want* the old behavior of leaving an instance in an 
> > error state. To me, that's not very different than fixing a 5xx response, 
> > as it is correcting an error on the server side.
> >
> 
> The problem I was thinking about is, how do you know if a cloud
> supports that new behaviour? For me, a microversion does help to
> advertise that. It's probably a good example of where it's not important
> enough to add a new capability to tell people that's possible.

I do see this as a capability though. I've been thinking of capabilities
as an answer to the question of "what can I do with this resource?" So a
capability query to an instance that errored during resize might
currently just return ['delete', 'call admin(joking)'] and assuming we
relax the restriction it would return ['delete', 'revert_resize'].
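
(To be clear, nothing like this exists in Nova today; purely as a
hypothetical sketch of the idea, such a query and its responses might
look like:

  GET /servers/{server_id}/capabilities

  # instance stuck in ERROR after a failed resize, today:
  {"capabilities": ["delete"]}

  # the same instance once the restriction is relaxed:
  {"capabilities": ["delete", "revert_resize"]}

The behavior change would then show up in the listing rather than in a
microversion bump.)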

> 
> That triggers the follow up question, of is that important in this
> case, could you just make the call and see if it works?

Sure. Until we have more discoverability in the API this is the reality
of what users need to do due to things like policy checks.

What I'm aiming for is discoverability that works well for users. The
current situation is that a new microversion means "go check the docs or
release notes"; where I'd like to be is that a new microversion means "check
the provided API schemas", and a new/removed capability expresses a
change in behavior. And if there are other types of changes users should
be aware of, then we should think about the right mechanism for exposing them.
All I'm saying is that all we have is a hammer; is everything we're using it on
really a nail? :)

> 
> Thanks,
> johnthetubaguy
> 


Re: [openstack-dev] [Magnum] Adding opensuse as new driver to Magnum

2016-08-04 Thread Murali Allada
Michal,

The right place for drivers is the /drivers folder.

Take a look at the existing drivers as examples. You'll also need to update 
this file https://github.com/openstack/magnum/blob/master/setup.cfg#L60 
and add a new entry point for the driver.

I would encourage you to hold off on this patch. We are currently working on 
using stevedore to load drivers and moving all the heat stack creation and 
update operations to each driver.
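
For illustration, the wiring will look roughly like this once stevedore is in
place. The namespace and entry point name below are just examples, not the
final Magnum interface:

    # setup.cfg (example entry point; the real namespace/name may differ):
    #     [entry_points]
    #     magnum.drivers =
    #         k8s_opensuse_v1 = magnum.drivers.k8s_opensuse_v1.driver:Driver
    #
    # Loading it with stevedore:
    from stevedore import driver

    mgr = driver.DriverManager(namespace='magnum.drivers',
                               name='k8s_opensuse_v1',
                               invoke_on_load=False)
    driver_cls = mgr.driver   # the Driver class registered above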

-Murali


From: Michal Jura 
Sent: Thursday, August 4, 2016 3:26 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Magnum] Adding opensuse as new driver to Magnum

Hi,

I would like to put up for discussion adding a new driver to Magnum. We would
like to propose openSUSE with Kubernetes as a driver. I did some initial
work in bug

https://launchpad.net/bugs/1600197
and changes
https://review.openstack.org/339480
https://review.openstack.org/349994

I've also got some comments from you about how this should proceed.

I can propose myself as maintainer for this change.

I have a couple of questions about moving this driver to the /contrib directory.
If I do this, how should the driver be installed from there?

Thank you for all the answers and help with doing this,

Best regards,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Clint Byrum
Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +:
> 
> On 04 Aug 2016, at 17:27, Mikhail Fedosin 
> > wrote:
> > 
> > Hi all,
> > > > after 6 months of Glare v1 API development we have decided to continue
> > our work in a separate project in the "openstack" namespace with its own
> > core team (me, Kairat Kushaev, Darja Shkhray and the original creator -
> > Alexander Tivelkov). We want to thank Glance community for their support
> > during the incubation period, valuable advice and suggestions - this time
> > was really productive for us. I believe that this step will allow the
> > Glare project to concentrate on feature development and move forward
> > faster. Having the independent service also removes inconsistencies
> > in understanding what Glance project is: it seems that a single project
> > cannot own two different APIs with partially overlapping functionality. So
> > with the separation of Glare into a new project, Glance may continue its
> > work on the OpenStack Images API, while Glare will become the reference
> > implementation of the new OpenStack Artifacts API.
> > 
> 
> I would suggest looking at more than just the development process when
> reflecting on this choice.
> While it may allow more rapid development, doing on your own will increase
> costs for end users and operators in areas like packaging, configuration,
> monitoring, quota … gaining critical mass in production for Glare will
> be much more difficult if you are not building on the Glance install base.

I have to agree with Tim here. I respect that it's difficult to build on
top of Glance's API, rather than just start fresh. But, for operators,
it's more services, more APIs to audit, and more complexity. For users,
they'll now have two ways to upload software to their clouds, which is
likely to result in a large portion just ignoring Glare even when it
would be useful for them.

What I'd hoped when Glare and Glance combined was that there would be
a single API that could be used for any software upload and listing. Is
there any kind of retrospective or documentation somewhere that explains
why that wasn't possible?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Project mascots update

2016-08-04 Thread Heidi Joy Tretheway
Steve Dake asked: “Heidi, I received a related question from a Kolla core 
reviewer. Do the project teams have input into the mascot design? If this has 
already been discussed on the list, I may have missed it.”

HJ: While we haven’t discussed this on the ML, we referenced it a bit on the web 
page. Our intent is to create a family of logos that have a distinctive design 
style that is unique to OpenStack community projects. That illustration style 
is something my colleague Todd Morey (OSF creative director) has been working 
on with several professional illustrators, choosing the best of the bunch, and 
developing a handful of logos that will serve as guideposts for the rest of 
design. 

We don’t have illustration resources sufficient to seek team input on 
individual designs—that’s why we emphasized team input on the mascot concept. 
(As I told Steve Hardy, I’m happy to talk to anyone who has special requests on 
how their mascot is portrayed.) Also, while nearly everyone has an opinion 
about design, those opinions will be varied (and often contradictory). I hope 
you’ll be willing to trust the excellent design leadership that produced our 
Summit designs, craveable shirts and stickers, and OpenStack's evolving visual 
identity. That said, I’ll be thrilled to hear from anyone with opinions on 
design to find out more about your perspective. I love talking about this 
stuff. 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Erno Kuvaja
On Thu, Aug 4, 2016 at 4:46 PM, Alexander Tivelkov
 wrote:
> I am the one who started the initiative 2.5 years ago, and was always
> advocating the "let's stay in Glance" approach during numerous discussions
> on "where should it belong" for all these years.
> Now I believe that it is time to move forward indeed. Some things remain to
> be defined (first of all the differences and responsibility sharing between
> Images and Artifacts APIs), but I am fully supportive of this move and
> strongly believe it is a step in the right direction. Thanks Mike, Nikhil,
> Flavio, Erno, Stuart, Brian and all others who helped Glare on this rough
> path.
>
>
> On Thu, Aug 4, 2016 at 6:29 PM Mikhail Fedosin 
> wrote:
>>
>> Hi all,
>> after 6 months of Glare v1 API development we have decided to continue our
>> work in a separate project in the "openstack" namespace with its own core
>> team (me, Kairat Kushaev, Darja Shkhray and the original creator - Alexander
>> Tivelkov). We want to thank Glance community for their support during the
>> incubation period, valuable advice and suggestions - this time was really
>> productive for us. I believe that this step will allow the Glare project to
>> concentrate on feature development and move forward faster. Having the
>> independent service also removes inconsistencies in understanding what
>> Glance project is: it seems that a single project cannot own two different
>> APIs with partially overlapping functionality. So with the separation of
>> Glare into a new project, Glance may continue its work on the OpenStack
>> Images API, while Glare will become the reference implementation of the new
>> OpenStack Artifacts API.
>>
>> Nevertheless, Glare team would like to continue to collaborate with the
>> Glance team in a new - cross-project - format. We still have lots in common,
>> both in code and usage scenarios, so we are looking forward for fruitful
>> work with the rest of the Glance team. Those of you guys who are interested
>> in Glare and the future of Artifacts API are also welcome to join the Glare
>> team: we have a lot of really exciting tasks and will always welcome new
>> members.
>> Meanwhile, despite the fact that my focus will be on the new project, I
>> will continue to be part of the Glance team and for sure I'm going to
>> contribute in Glance, because I am interested in this project and want to
>> help it be successful.
>>
>> We'll have the formal patches pushed to project-config earlier next week,
>> appropriate repositories, wiki and launchpad space will be created soon as
>> well.  Our regular weekly IRC meeting remains intact: it is 17:30 UTC
>> Mondays in #openstack-meeting-alt, it will just become a Glare project
>> meeting instead of a Glare sub-team meeting. Please feel free to join!
>>
>> Best regards,
>> Mikhail Fedosin
>>
>> P.S. For those of you who may be curious on the project name. We'll still
>> be called "Glare", but since we are on our own now this acronym becomes
>> recursive: GLARE now stands for "GLare Artifact REpository" :)
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Regards,
> Alexander Tivelkov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Glare is dead, Long Live Glare!

It's sad to see the child moving out of the house, but I'm sure this
is the right thing to do to keep the momentum ongoing while she
matures.

- Erno

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Tim Bell

On 04 Aug 2016, at 17:27, Mikhail Fedosin 
> wrote:

Hi all,
after 6 months of Glare v1 API development we have decided to continue our work 
in a separate project in the "openstack" namespace with its own core team (me, 
Kairat Kushaev, Darja Shkhray and the original creator - Alexander Tivelkov). 
We want to thank Glance community for their support during the incubation 
period, valuable advice and suggestions - this time was really productive for 
us. I believe that this step will allow the Glare project to concentrate on 
feature development and move forward faster. Having the independent service 
also removes inconsistencies in understanding what Glance project is: it seems 
that a single project cannot own two different APIs with partially overlapping 
functionality. So with the separation of Glare into a new project, Glance may 
continue its work on the OpenStack Images API, while Glare will become the 
reference implementation of the new OpenStack Artifacts API.


I would suggest looking at more than just the development process when 
reflecting on this choice.
While it may allow more rapid development, doing on your own will increase 
costs for end users and operators in areas like packaging, configuration, 
monitoring, quota … gaining critical mass in production for Glare will be much 
more difficult if you are not building on the Glance install base.
Tim
Nevertheless, Glare team would like to continue to collaborate with the Glance 
team in a new - cross-project - format. We still have lots in common, both in 
code and usage scenarios, so we are looking forward to fruitful work with the 
rest of the Glance team. Those of you guys who are interested in Glare and the 
future of Artifacts API are also welcome to join the Glare team: we have a lot 
of really exciting tasks and will always welcome new members.
Meanwhile, despite the fact that my focus will be on the new project, I will 
continue to be part of the Glance team and for sure I'm going to contribute in 
Glance, because I am interested in this project and want to help it be 
successful.

We'll have the formal patches pushed to project-config earlier next week, 
appropriate repositories, wiki and launchpad space will be created soon as 
well.  Our regular weekly IRC meeting remains intact: it is 17:30 UTC Mondays 
in #openstack-meeting-alt, it will just become a Glare project meeting instead 
of a Glare sub-team meeting. Please feel free to join!

Best regards,
Mikhail Fedosin

P.S. For those of you who may be curious on the project name. We'll still be 
called "Glare", but since we are on our own now this acronym becomes recursive: 
GLARE now stands for "GLare Artifact REpository" :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Fox, Kevin M
Ok. I'll play devils advocate here and speak to the other side of this, because 
you raised an interesting issue...

Ceph is outside of the tent. It provides a (mostly) API-compatible 
implementation of the Swift API (radosgw), and it is commonly used in OpenStack 
deployments.

Other OpenStack projects don't take it into account because it's not a big tent 
thing, even though it is very common. Because of some rules about only testing 
OpenStack things, radosgw is not tested against even though it is so common. 
This causes odd breakages at times that could easily be prevented, were it not 
for procedural things around the Big Tent.

I do think this should be fixed before we advocate that single-vendor projects exit 
the big tent after some time, as the testing situation may be made worse.

Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, August 04, 2016 5:59 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] persistently single-vendor projects

Thomas Goirand wrote:
> On 08/01/2016 09:39 AM, Thierry Carrez wrote:
>> But if a project is persistently single-vendor after some time and
>> nobody seems interested to join it, the technical value of that project
>> being "in" OpenStack rather than a separate project in the OpenStack
>> ecosystem of projects is limited. It's limited for OpenStack (why
>> provide resources to support a project that is obviously only beneficial
>> to one organization ?), and it's limited to the organization itself (why
>> go through the OpenStack-specific open processes when you could shortcut
>> it with internal tools and meetings ? why accept the oversight of the
>> Technical Committee ?).
>
> A project can still be useful for everyone with a single vendor
> contributing to it, even after a long period of existence. IMO that's
> not the issue we're trying to solve.

I agree with that -- open source projects can be useful for everyone
even if only a single vendor contributes to it.

But you seem to imply that the only way an open source project can be
useful is if it's developed as an OpenStack project under the OpenStack
Technical Committee governance. I'm not advocating that these projects
should stop or disappear. I'm just saying that if they are very unlikely
to grow a more diverse affiliation in the future, they derive little
value in being developed under the OpenStack Technical Committee
oversight, and would probably be equally useful if developed outside of
OpenStack official projects governance. There are plenty of projects
that are useful to OpenStack that are not developed under the TC
governance (libvirt, Ceph, OpenvSwitch...)

What is the point for a project to submit themselves to the oversight of
a multi-organization Technical Committee if they always will be the
result of the efforts of a single organization ?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Alexander Tivelkov
I am the one who started the initiative 2.5 years ago, and was always
advocating the "let's stay in Glance" approach during numerous discussions
on "where should it belong" for all these years.
Now I believe that it is time to move forward indeed. Some things remain to
be defined (first of all the differences and responsibility sharing between
Images and Artifacts APIs), but I am fully supportive of this move and
strongly believe it is a step in the right direction. Thanks Mike, Nikhil,
Flavio, Erno, Stuart, Brian and all others who helped Glare on this rough
path.


On Thu, Aug 4, 2016 at 6:29 PM Mikhail Fedosin 
wrote:

> Hi all,
> after 6 months of Glare v1 API development we have decided to continue our
> work in a separate project in the "openstack" namespace with its own core
> team (me, Kairat Kushaev, Darja Shkhray and the original creator -
> Alexander Tivelkov). We want to thank Glance community for their support
> during the incubation period, valuable advice and suggestions - this time
> was really productive for us. I believe that this step will allow the Glare
> project to concentrate on feature development and move forward faster.
> Having the independent service also removes inconsistencies in
> understanding what Glance project is: it seems that a single project cannot
> own two different APIs with partially overlapping functionality. So with
> the separation of Glare into a new project, Glance may continue its work on
> the OpenStack Images API, while Glare will become the reference
> implementation of the new OpenStack Artifacts API.
>
> Nevertheless, Glare team would like to continue to collaborate with the
> Glance team in a new - cross-project - format. We still have lots in
> common, both in code and usage scenarios, so we are looking forward to
> fruitful work with the rest of the Glance team. Those of you guys who are
> interested in Glare and the future of Artifacts API are also welcome to
> join the Glare team: we have a lot of really exciting tasks and will always
> welcome new members.
> Meanwhile, despite the fact that my focus will be on the new project, I
> will continue to be part of the Glance team and for sure I'm going to
> contribute in Glance, because I am interested in this project and want to
> help it be successful.
>
> We'll have the formal patches pushed to project-config earlier next week,
> appropriate repositories, wiki and launchpad space will be created soon as
> well.  Our regular weekly IRC meeting remains intact: it is 17:30 UTC
> Mondays in #openstack-meeting-alt, it will just become a Glare project
> meeting instead of a Glare sub-team meeting. Please feel free to join!
>
> Best regards,
> Mikhail Fedosin
>
> P.S. For those of you who may be curious on the project name. We'll still
> be called "Glare", but since we are on our own now this acronym becomes
> recursive: GLARE now stands for "GLare Artifact REpository" :)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Regards,
Alexander Tivelkov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-04 Thread John Garbutt
On 4 August 2016 at 16:28, Edward Leafe  wrote:
> On Aug 4, 2016, at 8:18 AM, Andrew Laski  wrote:
>
>> This gets to the point I'm trying to make. We don't guarantee old
>> behavior in all cases at which point users can no longer rely on
>> microversions to signal non breaking changes. And where we do guarantee
>> old behavior sometimes we do it artificially because the only signal we
>> have is microversions and that's the contract we're trying to adhere to.
>
> I've always understood microversions to be a way to prevent breaking an 
> automated tool when we change either the input or output of our API. Its 
> benefit was less clear for the case of adding a new API, since there is no 
> chance of breaking something that would never call it. We also accept that a 
> bug fix doesn't require a microversion bump, as users should *never* be 
> expecting a 5xx response, so not only does fixing that not need a bump, but 
> such fixes can be backported to affect all microversions.
>
> The idea that by specifying a distinct microversion would somehow guarantee 
> an immutable behavior, though, is simply not the case. We discussed this at 
> length at the midcycle regarding the dropping of the nova-network code; once 
> that's dropped, there won't be any way to get that behavior no matter what 
> microversion you specify. It's gone. We signal this with deprecation notices, 
> release notes, etc., and it's up to individuals to move away from using that 
> behavior during this deprecation period. A new microversion will never help 
> anyone who doesn't follow these signals.
>
> In the case that triggered this thread [0], the change was completely on the 
> server side of things; no change to either the request or response of the 
> API. It simply allowed a failed resize to be recovered more easily. That's a 
> behavior change, not an API change, and frankly, I can't imagine anyone who 
> would ever *want* the old behavior of leaving an instance in an error state. 
> To me, that's not very different than fixing a 5xx response, as it is 
> correcting an error on the server side.
>

The problem I was thinking about is, how do you know if a cloud
supports that new behaviour? For me, a microversion does help to
advertise that. It's probably a good example of where it's not important
enough to add a new capability to tell people that's possible.

That triggers the follow-up question of whether that's important in this
case: could you just make the call and see if it works?

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-04 Thread Edward Leafe
On Aug 4, 2016, at 8:18 AM, Andrew Laski  wrote:

> This gets to the point I'm trying to make. We don't guarantee old
> behavior in all cases at which point users can no longer rely on
> microversions to signal non breaking changes. And where we do guarantee
> old behavior sometimes we do it artificially because the only signal we
> have is microversions and that's the contract we're trying to adhere to.

I've always understood microversions to be a way to prevent breaking an 
automated tool when we change either the input or output of our API. Its 
benefit was less clear for the case of adding a new API, since there is no 
chance of breaking something that would never call it. We also accept that a 
bug fix doesn't require a microversion bump, as users should *never* be 
expecting a 5xx response, so not only does fixing that not need a bump, but 
such fixes can be backported to affect all microversions.
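
As a reminder of the mechanics, the client pins that contract by sending the
version header on each request; the endpoint and token below are placeholders:

    import requests

    COMPUTE = "https://compute.example.com/v2.1"   # placeholder endpoint
    TOKEN = "gAAAAA..."                            # placeholder auth token

    resp = requests.get(COMPUTE + "/servers/detail",
                        headers={"X-Auth-Token": TOKEN,
                                 "X-OpenStack-Nova-API-Version": "2.25"})
    # The request/response format is then fixed for microversion 2.25 on any
    # cloud that supports it, which is the interoperability point made above.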

The idea that by specifying a distinct microversion would somehow guarantee an 
immutable behavior, though, is simply not the case. We discussed this at length 
at the midcycle regarding the dropping of the nova-network code; once that's 
dropped, there won't be any way to get that behavior no matter what 
microversion you specify. It's gone. We signal this with deprecation notices, 
release notes, etc., and it's up to individuals to move away from using that 
behavior during this deprecation period. A new microversion will never help 
anyone who doesn't follow these signals.

In the case that triggered this thread [0], the change was completely on the 
server side of things; no change to either the request or response of the API. 
It simply allowed a failed resize to be recovered more easily. That's a 
behavior change, not an API change, and frankly, I can't imagine anyone who 
would ever *want* the old behavior of leaving an instance in an error state. To 
me, that's not very different than fixing a 5xx response, as it is correcting 
an error on the server side.

So if you haven't guessed by now, I agree with Andrew that microversions are 
not the best choice for signaling all changes. I'm not sure that adding either 
of the things he proposed would address the issue in [0], though.

[0] http://lists.openstack.org/pipermail/openstack-dev/2016-August/100606.html


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Mikhail Fedosin
Hi all,
after 6 months of Glare v1 API development we have decided to continue our
work in a separate project in the "openstack" namespace with its own core
team (me, Kairat Kushaev, Darja Shkhray and the original creator -
Alexander Tivelkov). We want to thank Glance community for their support
during the incubation period, valuable advice and suggestions - this time
was really productive for us. I believe that this step will allow the Glare
project to concentrate on feature development and move forward faster.
Having the independent service also removes inconsistencies in
understanding what Glance project is: it seems that a single project cannot
own two different APIs with partially overlapping functionality. So with
the separation of Glare into a new project, Glance may continue its work on
the OpenStack Images API, while Glare will become the reference
implementation of the new OpenStack Artifacts API.

Nevertheless, Glare team would like to continue to collaborate with the
Glance team in a new - cross-project - format. We still have lots in
common, both in code and usage scenarios, so we are looking forward to
fruitful work with the rest of the Glance team. Those of you guys who are
interested in Glare and the future of Artifacts API are also welcome to
join the Glare team: we have a lot of really exciting tasks and will always
welcome new members.
Meanwhile, despite the fact that my focus will be on the new project, I
will continue to be part of the Glance team and for sure I'm going to
contribute in Glance, because I am interested in this project and want to
help it be successful.

We'll have the formal patches pushed to project-config earlier next week,
appropriate repositories, wiki and launchpad space will be created soon as
well.  Our regular weekly IRC meeting remains intact: it is 17:30 UTC
Mondays in #openstack-meeting-alt, it will just become a Glare project
meeting instead of a Glare sub-team meeting. Please feel free to join!

Best regards,
Mikhail Fedosin

P.S. For those of you who may be curious on the project name. We'll still
be called "Glare", but since we are on our own now this acronym becomes
recursive: GLARE now stands for "GLare Artifact REpository" :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [horizon-plugin] AngularJS 1.5.8

2016-08-04 Thread Tripp, Travis S
+1

On 8/3/16, 5:36 PM, "Thomas Goirand"  wrote:

On 08/03/2016 12:19 PM, Rob Cresswell wrote:
> Hi all,
> 
> Angular 1.5.8 is now updated in its XStatic
> repo: https://github.com/openstack/xstatic-angular
> 
> I've done some manual testing of the angular content and found no issues
> so far. I'll be checking that the JS tests and integration tests pass
> too; if they do, would it be desirable to release 1.5.8 this week, or
> wait until after N is released? It'd be nice to be in sync with current
> stable, but I don't want to cause unnecessary work a few weeks before
> plugin FF.
> 
> Thoughts?
> 
> Rob

Please go ahead. Debian Sid has 1.5.5, so even if we don't want to,
Debian will be using that version. It's better to be in sync.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nova compute reports the disk usage

2016-08-04 Thread Tuan Luong
Hi Chris,

Actually, the system does not belong to us, therefore we only have the log file. 
But it seems reasonable that besides the size of the ephemeral disk in _base, we 
also have the VM image downloaded from Glance that consumes the disk 
capacity. That can explain the strange value of disk_over_commit.
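
Just to put numbers on that guess: if the 600 GB ephemeral size ends up being 
counted twice (once as the instance's virtual size in disk_over_commit, and 
once more by a backing file or cached image consuming free_disk), the 
arithmetic would be roughly 1016 - 600 - 600 = -184, which matches the value 
we observed. That is only an assumption on our side until we can check the 
_base directory on that system.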

I will check this situation later. Btw, thank you for your help.

Tuan

-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com] 
Sent: August 04, 2016 15:57
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Nova compute reports the disk usage

On 08/04/2016 02:42 AM, Tuan Luong wrote:
> Hi Chris,
>
> Yes we used qcow image. However I still have something not clear. What I know 
> is that:
>
> - disk_available_least = free_disk - disk_over_commit
> - disk_over_commit = int(virt_size) - physical_disk_size
>
> Before launching:
> - free_disk = 1025
> - disk_available_least = 1016
>
> If we use the normal flavor with a 600 GB disk and no ephemeral, the result is 
> good: both of the above values decrease by 600 GB.
> If we use the flavor with no disk but 600 GB ephemeral, the result is 
> disk_available_least = -184.

Just a thought...did you clear out the _base directory between tests?  Or is 
there maybe an extra backing file sitting there consuming space?

Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Gluon] Face-to-Face Meeting Planned on August 18 & 19 in Silicon Valley

2016-08-04 Thread HU, BIN
Hello team,

During the IRC meeting on August 3 [1], the group agreed to have a face-to-face 
meeting on August 18 & 19 in Silicon Valley to discuss various topics and set 
up a schedule for a potential demo at the OpenStack Summit in Barcelona.

The meeting logistics are available at [2], and a tentative agenda can be found at 
[3]. If you plan to attend the meeting, please follow the protocol outlined in 
[2].

Please join us, and we look forward to meeting everyone.

Thank you
Bin

[1] 
http://eavesdrop.openstack.org/meetings/gluon/2016/gluon.2016-08-03-18.00.html
[2] https://wiki.openstack.org/wiki/Meetings/Gluon/Logistics-2016081819
[3] https://wiki.openstack.org/wiki/Meetings/Gluon/Agenda-2016081819

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-04 Thread Jay Pipes

On 08/04/2016 10:31 AM, Jim Rollenhagen wrote:

On Wed, Aug 03, 2016 at 07:47:37PM -0400, Jay Pipes wrote:

Hi Novas and anyone interested in how to represent capabilities in a
consistent fashion.

I spent an hour creating a new os-capabilities Python library this evening:

http://github.com/jaypipes/os-capabilities

Please see the README for examples of how the library works and how I'm
thinking of structuring these capability strings and symbols. I intend
os-capabilities to be the place where the OpenStack community catalogs and
collates standardized features for hardware, devices, networks, storage,
hypervisors, etc.

Let me know what you think about the structure of the library and whether
you would be interested in owning additions to the library of constants in
your area of expertise.

Next steps for the library include:

* Bringing in other top-level namespaces like disk: or net: and working with
contributors to fill in the capability strings and symbols.
* Adding constraints functionality to the library. For instance, building in
information to the os-capabilities interface that would allow a set of
capabilities to be cross-checked for set violations. As an example, a
resource provider having DISK_GB inventory cannot have *both* the disk:ssd
*and* the disk:hdd capability strings associated with it -- clearly the disk
storage is either SSD or spinning disk.


Well, if we constrain ourselves to VMs, yes. :)


I wasn't constraining ourselves to VMs :)


One of the issues with running ironic behind nova is that there isn't
any way to express that a flavor (or instance) has (or should have)
multiple physical disks. It would certainly be possible to boot a
baremetal machine that does have SSD and spinning rust.

I don't have a solution in mind here, just wanted to point out that we
need to keep more than VMs in mind when talking about capabilities. :)


Note that in the above, I am explicit that the disk:hdd and disk:ssd 
capabilities should not be provided by a resource provider **that has an 
inventory of DISK_GB resources** :)


Ironic baremetal nodes do not have an inventory record of DISK_GB. 
Instead, the resource class is dynamic -- e.g. IRON_SILVER. The 
constraint of not having disk:hdd and disk:ssd wouldn't apply in that case.
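
To make the constraint idea concrete, here is a minimal sketch of the kind of 
set-violation check described above. The helper and structure names are 
hypothetical, not the actual os-capabilities API:

    # Hypothetical sketch only; not the real os-capabilities interface.
    MUTUALLY_EXCLUSIVE = [
        {"disk:ssd", "disk:hdd"},
    ]

    def check_capabilities(capabilities, resources):
        """Reject conflicting disk capabilities on DISK_GB providers."""
        if "DISK_GB" not in resources:
            # e.g. an Ironic node exposing a dynamic class like IRON_SILVER
            # would skip this particular constraint entirely
            return
        for exclusive in MUTUALLY_EXCLUSIVE:
            if exclusive.issubset(capabilities):
                raise ValueError("conflicting capabilities: %s"
                                 % ", ".join(sorted(exclusive)))

    # This would raise, since the provider has DISK_GB inventory and claims both:
    check_capabilities({"disk:ssd", "disk:hdd"}, {"DISK_GB": 2048})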


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-04 Thread Jim Rollenhagen
On Wed, Aug 03, 2016 at 07:47:37PM -0400, Jay Pipes wrote:
> Hi Novas and anyone interested in how to represent capabilities in a
> consistent fashion.
> 
> I spent an hour creating a new os-capabilities Python library this evening:
> 
> http://github.com/jaypipes/os-capabilities
> 
> Please see the README for examples of how the library works and how I'm
> thinking of structuring these capability strings and symbols. I intend
> os-capabilities to be the place where the OpenStack community catalogs and
> collates standardized features for hardware, devices, networks, storage,
> hypervisors, etc.
> 
> Let me know what you think about the structure of the library and whether
> you would be interested in owning additions to the library of constants in
> your area of expertise.
> 
> Next steps for the library include:
> 
> * Bringing in other top-level namespaces like disk: or net: and working with
> contributors to fill in the capability strings and symbols.
> * Adding constraints functionality to the library. For instance, building in
> information to the os-capabilities interface that would allow a set of
> capabilities to be cross-checked for set violations. As an example, a
> resource provider having DISK_GB inventory cannot have *both* the disk:ssd
> *and* the disk:hdd capability strings associated with it -- clearly the disk
> storage is either SSD or spinning disk.

Well, if we constrain ourselves to VMs, yes. :)

One of the issues with running ironic behind nova is that there isn't
any way to express that a flavor (or instance) has (or should have)
multiple physical disks. It would certainly be possible to boot a
baremetal machine that does have SSD and spinning rust.

I don't have a solution in mind here, just wanted to point out that we
need to keep more than VMs in mind when talking about capabilities. :)

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-04 Thread Sean McGinnis
I have zero say here, but just want to say Tom is always a great guy to
have around and knows his stuff. +1 from me that he should be core.

On Wed, Aug 03, 2016 at 02:42:04PM -0400, Ben Swartzlander wrote:
> Tom (tbarron on IRC) has been working on OpenStack (both cinder and
> manila) for more than 2 years and has spent a great deal of time on
> Manila reviews in the last release. Tom brings another
> package/distro point of view to the community as well as former
> storage vendor experience.
> 
> -Ben Swartzlander
> Manila PTL
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nova compute reports the disk usage

2016-08-04 Thread Chris Friesen

On 08/04/2016 02:42 AM, Tuan Luong wrote:

Hi Chris,

Yes we used qcow image. However I still have something not clear. What I know 
is that:

- disk_available_least = free_disk - disk_over_commit
- disk_over_commit = int(virt_size) - physical_disk_size

Before launching:
- free_disk = 1025
- disk_available_least = 1016

If we use the normal flavor with a 600 GB disk and no ephemeral, the result is 
good: both of the above values decrease by 600 GB.
If we use the flavor with no disk but 600 GB ephemeral, the result is 
disk_available_least = -184.


Just a thought...did you clear out the _base directory between tests?  Or is 
there maybe an extra backing file sitting there consuming space?


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Reviewing coverage jobs set up

2016-08-04 Thread Jeremy Stanley
On 2016-08-04 09:36:18 -0400 (-0400), Jim Rollenhagen wrote:
[...]
> Nice! FWIW, it's also documented here:
> http://docs.openstack.org/infra/manual/developers.html#automated-testing

I just proposed a note clarifying the situation with URLs for merge
commits too, since that seems to have been previously missing:
https://review.openstack.org/351199
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Improving Swift deployments with TripleO

2016-08-04 Thread Giulio Fidente

On 08/04/2016 01:26 PM, Christian Schwede wrote:

On 04.08.16 10:27, Giulio Fidente wrote:

On 08/02/2016 09:36 PM, Christian Schwede wrote:

Hello everyone,


thanks Christian,


I'd like to improve the Swift deployments done by TripleO. There are a
few problems today when deployed with the current defaults:

1. Adding new nodes (or replacing existing nodes) is not possible,
because the rings are built locally on each host and a new node doesn't
know about the "history" of the rings. Therefore rings might become
different on the nodes, and that results in an unusable state eventually.


one of the ideas for this was to use a tempurl in the undercloud Swift
to upload the rings built by a single overcloud node, not by the
undercloud

so I proposed a new heat resource which would permit us to create a
swift tempurl in the undercloud during the deployment

https://review.openstack.org/#/c/350707/

if we build the rings on the undercloud we can ignore this and use a
mistral action instead, as pointed by Steven

the good thing about building rings in the overcloud is that it doesn't
force us to have a static node mapping for each and every deployment, but
it makes it hard to cope with heterogeneous environments


That's true. However - we still need to collect the device data from all
the nodes from the undercloud, push it to at least one overcloud node,
build/update the rings there, push it to the undercloud Swift and use
that on all overcloud nodes. Or not?


sure, let's build on the undercloud, when automated with mistral it 
shouldn't make a big difference for the user



I was also thinking more about the static node mapping and how to avoid
this. Could we add a host alias using the node UUIDs? That would never
change, it's available from the introspection data and therefore could
be used in the rings.

http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_specific_hieradata.html#collecting-the-node-uuid


right, this is the mechanism I wanted to use to provide per-node disk 
maps; it's how it works for ceph disks as well
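
as a rough sketch of what I have in mind (UUIDs, device names and the exact 
value format are made up; the real format depends on the puppet-swift reviews 
above), the tool could emit one hieradata file per introspected system uuid:

    import yaml

    # made-up introspection data: system uuid -> swift disks to use
    disks_by_node = {
        "32e87b4c-c4a7-41be-865b-191684a6883b": {"sdb": {}, "sdc": {}},
        "7f1c2d2e-9a64-4b3a-9d3b-0c1f6a2d4e5f": {"sdb": {}},
    }

    for system_uuid, disks in disks_by_node.items():
        # one hieradata file per node, keyed by its system uuid
        with open("%s.yaml" % system_uuid, "w") as f:
            yaml.safe_dump({"swift::storage::disks": disks}, f,
                           default_flow_style=False)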



2. The rings are only using a single device, and it seems that this is
just a directory and not a mountpoint with a real device. Therefore data
is stored on the root device - even if you have 100TB disk space in the
background. If not fixed manually your root device will run out of space
eventually.


for the disks instead I am thinking to add a create_resources wrapper in
puppet-swift:

https://review.openstack.org/#/c/350790
https://review.openstack.org/#/c/350840/

so that we can pass via hieradata per-node swift::storage::disks maps

we have a mechanism to push per-node hieradata based on the system uuid,
we could extend the tool to capture the nodes (system) uuid and generate
per-node maps


Awesome, thanks Giulio!

I will test that today. So the tool could generate the mapping
automatically, and we don't need to filter puppet facts on the nodes
themselves. Nice!


and we could re-use the same tool to generate the ceph::osds disk maps 
as well :)


--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Reviewing coverage jobs set up

2016-08-04 Thread Jim Rollenhagen
On Thu, Aug 04, 2016 at 09:24:15AM -0400, Doug Hellmann wrote:
> Excerpts from Kiall Mac Innes's message of 2016-08-04 01:27:23 +0100:
> > On 18/07/16 20:14, Jeremy Stanley wrote:
> > > Note that this will only be true if the change's parent commit in
> > > Gerrit was the branch tip at the time it landed. Otherwise (and
> > > quite frequently in fact) you will need to identify the SHA of the
> > > merge commit which was created at the time it merged and use that
> > > instead to find the post job.
> > Without wanting to diverge too much from the topic at hand, I believe this
> > is why those of us who only occasionally want to look at post job output
> > usually just give up! Keeping this in your head for the once every few
> > months it's needed is hard ;)
> > 
> > A change being merged is always (AFAIK) part and parcel with a review
> > being closed, so - why not publish to /post/ (with some
> > level of dir sharding)?
> > 
> > Thanks,
> > Kiall
> > 
> 
> I could never remember the formula for constructing the URL either,
> so I built this to help me: https://pypi.python.org/pypi/git-os-job

Nice! FWIW, it's also documented here:
http://docs.openstack.org/infra/manual/developers.html#automated-testing

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Reviewing coverage jobs set up

2016-08-04 Thread Doug Hellmann
Excerpts from Kiall Mac Innes's message of 2016-08-04 01:27:23 +0100:
> On 18/07/16 20:14, Jeremy Stanley wrote:
> > Note that this will only be true if the change's parent commit in
> > Gerrit was the branch tip at the time it landed. Otherwise (and
> > quite frequently in fact) you will need to identify the SHA of the
> > merge commit which was created at the time it merged and use that
> > instead to find the post job.
> Without wanting to diverge too much from the topic at hand, I believe this
> is why those of us who only occasionally want to look at post job output
> usually just give up! Keeping this in your head for the once every few
> months it's needed is hard ;)
> 
> A change being merged is always (AFAIK) part and parcel with a review
> being closed, so - why not publish to /post/ (with some
> level of dir sharding)?
> 
> Thanks,
> Kiall
> 

I could never remember the formula for constructing the URL either,
so I built this to help me: https://pypi.python.org/pypi/git-os-job

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-04 Thread Andrew Laski


On Thu, Aug 4, 2016, at 08:20 AM, Sean Dague wrote:
> On 08/03/2016 08:54 PM, Andrew Laski wrote:
> > I've brought some of these thoughts up a few times in conversations
> > where the Nova team is trying to decide if a particular change warrants
> > a microversion. I'm sure I've annoyed some people by this point because
> it wasn't germane to those discussions. So I'll lay this out in its own
> > thread.
> > 
> > I am a fan of microversions. I think they work wonderfully to express
> > when a resource representation changes, or when different data is
> > required in a request. This allows clients to make the same request
> > across multiple clouds and expect the exact same response format,
> > assuming those clouds support that particular microversion. I also think
> > they work well to express that a new resource is available. However I do
> think they have some shortcomings in expressing that a resource
> > has been removed. But in short I think microversions work great for
> > expressing that there have been changes to the structure and format of
> > the API.
> > 
> > I think microversions are being overused as a signal for other types of
> > changes in the API because they are the only tool we have available. The
> > most recent example is a proposal to allow the revert_resize API call to
> > work when a resizing instance ends up in an error state. I consider
> > microversions to be problematic for changes like that because we end up
> > in one of two situations:
> > 
> > 1. The microversion is a signal that the API now supports this action,
> > but users can perform the action at any microversion. What this really
> > indicates is that the deployment being queried has upgraded to a certain
> > point and has a new capability. The structure and format of the API have
> > not changed so an API microversion is the wrong tool here. And the
> > expected use of a microversion, in my opinion, is to demarcate that the
> > API is now different at this particular point.
> > 
> > 2. The microversion is a signal that the API now supports this action,
> > and users are restricted to using it only on or after that microversion.
> > In many cases this is an artificial constraint placed just to satisfy
> > the expectation that the API does not change before the microversion.
> > But the reality is that if the API change was exposed to every
> > microversion it does not affect the ability I lauded above of a client
> > being able to send the same request and receive the same response from
> > disparate clouds. In other words exposing the new action for all
> > microversions does not affect the interoperability story of Nova which
> > is the real use case for microversions. I do recognize that the
> > situation may be more nuanced and constraining the action to specific
> > microversions may be necessary, but that's not always true.
> > 
> > In case 1 above I think we could find a better way to do this. And I
> > don't think we should do case 2, though there may be special cases that
> > warrant it.
> > 
> > As possible alternate signalling methods I would like to propose the
> > following for consideration:
> > 
> > Exposing capabilities that a user is allowed to use. This has been
> > discussed before and there is general agreement that this is something
> we would like in Nova. Capabilities will programmatically inform users
> > that a new action has been added or an existing action can be performed
> > in more cases, like revert_resize. With that in place we can avoid the
> > ambiguous use of microversions to do that. In the meantime I would like
> > the team to consider not using microversions for this case. We have
> > enough of them being added that I think for now we could just wait for
> > the next microversion after a capability is added and document the new
> > capability there.
> 
> The problem with this approach is that the capability add isn't on a
> microversion boundary; as long as we continue to believe that we want to
> support CD deployments, this means people can deploy code with the
> behavior change that's not documented or signaled in any way.

The fact that the capability add isn't on a microversion boundary is
exactly my point. There's no need for it to be in many cases. But it
would only apply for capability adds which don't affect the
interoperability of multiple deployments.

The signaling would come from the ability to query the capabilities
listing. A change in what that listing returns indicates a behavior
change.

Another reason I like the above mechanism is that it handles differences
in policy better as well. As much as we say that two clouds with the
same microversions available should accept the same requests and return
the same responses that's not actually true due to policy checks. I know
we discussed removing the ability to modify the response based on policy
so I'm not referring to that. What I mean is that a full action could be
disabled for a user. In this situation 

Re: [openstack-dev] [all] [infra] Reviewing coverage jobs set up

2016-08-04 Thread Jeremy Stanley
On 2016-08-04 01:27:23 +0100 (+0100), Kiall Mac Innes wrote:
> Without wanting to diverge too much from the topic at hand, I believe this
> is why those of us who only occasionally want to look at post job output
> usually just give up! Keeping this in your head for the once every few
> months it's needed is hard ;)
> 
> A change being merged is always (AFAIK) part and parcel with a review
> being closed, so - why not publish to /post/ (with some
> level of dir sharding)?

We've discussed this in the past. It's possible if we switch to
using Gerrit's change-merged events then there could be sufficient
context to run jobs on changes once they merge, as compared to the
raw Git SHAs ref-updated events currently supply for the post
pipeline. Though is that what you really want: coverage of the
change itself rather than coverage of the repository's state at the
point where that change merged?

My guess is that you're expecting some sort of additional magic to
figure out the merge commit associated with the change that merged
if there is one and then have jobs run on that, or else run jobs on
the change in the situation where no merge commit was needed. I
think this would get considerably more complicated, but I'll defer
to someone with deeper knowledge of Zuul's internals on the actual
complexity.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to clone deleted branch, are we keeping the history of the last commit id of deleted stable branches.

2016-08-04 Thread Jeremy Stanley
On 2016-08-03 19:08:22 -0500 (-0500), Matt Riedemann wrote:
> On 8/3/2016 6:34 PM, Saju M wrote:
> > I want to clone the deleted icehouse branch of cinder. I know that
> > we can't clone it directly. Are we keeping the history of the
> > last commit id of deleted stable branches? Please share the last
> > commit id of the icehouse branch of cinder if somebody knows it. I
> > need that to check the working code of an old feature.
> 
> Clone the repo and checkout the icehouse-eol tag, that will put
> you in a detached head state at which point you can create a
> branch from it.

Or `git checkout -B stable/icehouse icehouse-eol` to automatically
create a local stable/icehouse branch from the icehouse-eol tag.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Thierry Carrez
Thomas Goirand wrote:
> On 08/01/2016 09:39 AM, Thierry Carrez wrote:
>> But if a project is persistently single-vendor after some time and
>> nobody seems interested to join it, the technical value of that project
>> being "in" OpenStack rather than a separate project in the OpenStack
>> ecosystem of projects is limited. It's limited for OpenStack (why
>> provide resources to support a project that is obviously only beneficial
>> to one organization ?), and it's limited to the organization itself (why
>> go through the OpenStack-specific open processes when you could shortcut
>> it with internal tools and meetings ? why accept the oversight of the
>> Technical Committee ?).
> 
> A project can still be useful for everyone with a single vendor
> contributing to it, even after a long period of existence. IMO that's
> not the issue we're trying to solve.

I agree with that -- open source projects can be useful for everyone
even if only a single vendor contributes to it.

But you seem to imply that the only way an open source project can be
useful is if it's developed as an OpenStack project under the OpenStack
Technical Committee governance. I'm not advocating that these projects
should stop or disappear. I'm just saying that if they are very unlikely
to grow a more diverse affiliation in the future, they derive little
value in being developed under the OpenStack Technical Committee
oversight, and would probably be equally useful if developed outside of
OpenStack official projects governance. There are plenty of projects
that are useful to OpenStack that are not developed under the TC
governance (libvirt, Ceph, OpenvSwitch...)

What is the point for a project to submit themselves to the oversight of
a multi-organization Technical Committee if they always will be the
result of the efforts of a single organization ?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-04 Thread Sean Dague
On 08/03/2016 08:54 PM, Andrew Laski wrote:
> I've brought some of these thoughts up a few times in conversations
> where the Nova team is trying to decide if a particular change warrants
> a microversion. I'm sure I've annoyed some people by this point because
> it wasn't germane to those discussions. So I'll lay this out in its own
> thread.
> 
> I am a fan of microversions. I think they work wonderfully to express
> when a resource representation changes, or when different data is
> required in a request. This allows clients to make the same request
> across multiple clouds and expect the exact same response format,
> assuming those clouds support that particular microversion. I also think
> they work well to express that a new resource is available. However I do
> think think they have some shortcomings in expressing that a resource
> has been removed. But in short I think microversions work great for
> expressing that there have been changes to the structure and format of
> the API.
> 
> I think microversions are being overused as a signal for other types of
> changes in the API because they are the only tool we have available. The
> most recent example is a proposal to allow the revert_resize API call to
> work when a resizing instance ends up in an error state. I consider
> microversions to be problematic for changes like that because we end up
> in one of two situations:
> 
> 1. The microversion is a signal that the API now supports this action,
> but users can perform the action at any microversion. What this really
> indicates is that the deployment being queried has upgraded to a certain
> point and has a new capability. The structure and format of the API have
> not changed so an API microversion is the wrong tool here. And the
> expected use of a microversion, in my opinion, is to demarcate that the
> API is now different at this particular point.
> 
> 2. The microversion is a signal that the API now supports this action,
> and users are restricted to using it only on or after that microversion.
> In many cases this is an artificial constraint placed just to satisfy
> the expectation that the API does not change before the microversion.
> But the reality is that if the API change was exposed to every
> microversion it does not affect the ability I lauded above of a client
> being able to send the same request and receive the same response from
> disparate clouds. In other words exposing the new action for all
> microversions does not affect the interoperability story of Nova which
> is the real use case for microversions. I do recognize that the
> situation may be more nuanced and constraining the action to specific
> microversions may be necessary, but that's not always true.
> 
> In case 1 above I think we could find a better way to do this. And I
> don't think we should do case 2, though there may be special cases that
> warrant it.
> 
> As possible alternate signalling methods I would like to propose the
> following for consideration:
> 
> Exposing capabilities that a user is allowed to use. This has been
> discussed before and there is general agreement that this is something
> we would like in Nova. Capabilities will programmatically inform users
> that a new action has been added or an existing action can be performed
> in more cases, like revert_resize. With that in place we can avoid the
> ambiguous use of microversions to do that. In the meantime I would like
> the team to consider not using microversions for this case. We have
> enough of them being added that I think for now we could just wait for
> the next microversion after a capability is added and document the new
> capability there.

The problem with this approach is that the capability add isn't on a
microversion boundary. As long as we continue to believe that we want to
support CD deployments, this means people can deploy code with the
behavior change that's not documented or signaled in any way.

A microversion is communication of change that is accessible all the way
to the end user (and is currently our only communication channel for
that). There are definitely gray areas here, in which case you are
deciding whether you over-communicate (put in a microversion just in
case it turns out to be significant from a programming perspective) or
under-communicate, bunch things up and figure no one will really mind.

When we have discoverable capabilities infrastructure, we can probably
move some of these to that. But currently we don't have that. And until
we do, version numbers are cheap.

> Secondly we could consider some indicator that exposes how new the code
> in a deployment is. Rather than using microversions as a proxy to
> indicate that a deployment has hit a certain point perhaps there could
> be a header that indicates the date of the last commit in that code.
> That's not an ideal way to implement it but hopefully it makes it clear
> what I'm suggesting. Some marker that a user can use to determine that a
> new behavior 

[openstack-dev] [cinder][qa] Volume attachment intermittently failing

2016-08-04 Thread Guilherme Moro
Hi all,

I'm running Kilo (by customer demand) and I have a problem where I get
random failures while deploying a stack; the stack is a series of DB
instances, each with its own volume. I can reproduce it in pretty much
every run, usually in a different instance each time.
The problem is that the attached device doesn't show up during boot,
but is available; after a rescan of the PCI bus ('echo "1" >
/sys/bus/pci/rescan') the device is detected and functional.
I found some references to it, and it seems that there's a test in tempest
that triggers the bug:

Old e-mail thread discussing the test that triggered the bug:
http://lists.openstack.org/pipermail/openstack-dev/2014-August/043347.html

The actual bug that is marked in the test as being the one blocking the
test:
https://bugs.launchpad.net/tempest/+bug/1205344

So you can get the background: the test is still disabled in the HEAD of
tempest, so I would believe that the bug never got fixed, or that someone
just never got around to updating the tickets and re-enabling the test.
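
In the meantime, as a stopgap (not a fix), something like the following in
the guest's init scripts should paper over it -- note that /dev/vdb below is
only my assumption for where the volume is expected to show up:

    # wait up to a minute for the volume, re-scanning the PCI bus in between
    for i in $(seq 1 12); do
        [ -b /dev/vdb ] && break   # assumed device name; adjust as needed
        echo "1" > /sys/bus/pci/rescan
        sleep 5
    done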

If anybody has any info, I would appreciate it :)

Regards,

Guilherme Moro
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Thomas Goirand
On 07/31/2016 05:59 PM, Fox, Kevin M wrote:
> This sounds good to me.
> 
> What about making it iterative but with a delayed start. Something like:
> 
> There is a grace period of 1 year for projects that newly join the big tent,
> after which the following criteria will be evaluated at the end of each
> OpenStack release cycle to keep the project in the big tent for the next
> cycle. The project should not have active cores from one company amounting
> to greater than 45% of the active core membership. If that number is higher,
> the project is given notice that it is under-diverse and has 6 months
> remaining in the big tent to show it is attempting to increase diversity by
> shifting the ratio to a more diverse active core membership. The active core
> membership percentage of the over-represented company, called X%, must be
> shown to be reduced by 25% or to reach 45%, i.e. max(X% * (100%-25%), 45%).
> If the criteria are met, the project can remain in the big tent and a new
> cycle will begin (another notification and 6 months if still out of
> compliance).
> 
> This should allow projects that are, or become, under-diverse a path towards
> working on project membership diversity. It gives projects that are very far
> out of whack a while to fix it. It basically gives projects over-represented by:
>  * (80%, 100%] -  gets 18 months to fix it
>  * (60%, 80%] - gets 12 months
>  * (45%, 60%] - gets 6 months
> 
> Thoughts? The numbers should be fairly easy to change to make for different 
> amounts of grace period.
> 
> Thanks,
> Kevin

I strongly disagree with this kind of procedure. A project with a single
vendor can still be a very good addition to the big tent. The tagging
which we already have in place is IMO enough.

Also, rating projects by the % of commits from a single vendor won't cut
it: it's a very bad metric. Let me attempt to take an example to explain
my thoughts.

Let's say Alice does the vast majority of patches in project
Bob, which makes her company the top committer by far. If Alice leaves
the project (maybe for personal reasons, or because her company assigned
her to something else), then maybe suddenly, we get the diversity
percentages correct, just because she left and her company's
contribution ratio dropped. Does that make project Bob healthier just
because she left? I don't think it does. And that's not what our ruleset
should enforce. Our rules should push for more contributions, not less. [1]

If we want to make collaboration go better, it's going to be a social
thing. Attempting to add rules will only make things more complicated
for new projects.

Cheers,

Thomas Goirand (zigo)

[1] I'm not sure I made myself clear here; if I was not, let me know and
I'll attempt to phrase it better.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Improving Swift deployments with TripleO

2016-08-04 Thread Christian Schwede
On 04.08.16 10:27, Giulio Fidente wrote:
> On 08/02/2016 09:36 PM, Christian Schwede wrote:
>> Hello everyone,
> 
> thanks Christian,
> 
>> I'd like to improve the Swift deployments done by TripleO. There are a
>> few problems today when deployed with the current defaults:
>>
>> 1. Adding new nodes (or replacing existing nodes) is not possible,
>> because the rings are built locally on each host and a new node doesn't
>> know about the "history" of the rings. Therefore rings might become
>> different on the nodes, and that results in an unusable state eventually.
> 
> one of the ideas for this was to use a tempurl in the undercloud swift
> where to upload the rings built by a single overcloud node, not by the
> undercloud
> 
> so I proposed a new heat resource which would permit us to create a
> swift tempurl in the undercloud during the deployment
> 
> https://review.openstack.org/#/c/350707/
> 
> if we build the rings on the undercloud we can ignore this and use a
> mistral action instead, as pointed by Steven
> 
> the good thing about building rings in the overcloud is that it doesn't
> force us to have a static node mapping for each and every deployment but
> it makes hard to cope with heterogeneous environments

That's true. However - we still need to collect the device data from all
the nodes from the undercloud, push it to at least one overcloud node,
build/update the rings there, push it to the undercloud Swift and use
that on all overcloud nodes. Or not?

That leaves some room for new inconsistencies IMO: how do we ensure that
the overcloud node uses the latest ring to start with? Also, ring building
has to be limited to one overcloud node, otherwise we might end up with
multiple ring-building nodes. And how can an operator manually modify the rings?

The tool to build the rings on the undercloud could be further improved
later, for example I'd like to be able to move data to new nodes slowly
over time, and also query existing storage servers about the progress.
Therefore we need some more functionality than currently available in
the ringbuilding part in puppet-swift IMO.
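
Roughly, the tool would just wrap the usual swift-ring-builder calls on the
undercloud, something like the following (the part power, replica count,
device name and weight here are only placeholders):

    swift-ring-builder object.builder create 10 3 1
    swift-ring-builder object.builder add r1z1-192.0.2.10:6000/sdb 100
    swift-ring-builder object.builder rebalance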

I think if we move this step to the undercloud we could solve a lot of
these challenges in a consistent way. WDYT?

I was also thinking more about the static node mapping and how to avoid
this. Could we add a host alias using the node UUIDs? That would never
change, it's available from the introspection data and therefore could
be used in the rings.

http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/node_specific_hieradata.html#collecting-the-node-uuid

>> 2. The rings are only using a single device, and it seems that this is
>> just a directory and not a mountpoint with a real device. Therefore data
>> is stored on the root device - even if you have 100TB disk space in the
>> background. If not fixed manually your root device will run out of space
>> eventually.
> 
> for the disks instead I am thinking to add a create_resources wrapper in
> puppet-swift:
> 
> https://review.openstack.org/#/c/350790
> https://review.openstack.org/#/c/350840/
>
> so that we can pass via hieradata per-node swift::storage::disks maps
> 
> we have a mechanism to push per-node hieradata based on the system uuid,
> we could extend the tool to capture the nodes (system) uuid and generate
> per-node maps

Awesome, thanks Giulio!

I will test that today. So the tool could generate the mapping
automatically, and we don't need to filter puppet facts on the nodes
themselves. Nice!

> then, with the above puppet changes and having the per-node map and the
> rings download url, we could feed them to the templates, replace with an
> environment the rings building implementation and deploy without further
> customizations
> 
> what do you think?

Yes, that sounds like a good plan to me.

I'll continue working on the ringbuilder tool for now and see how I
integrate this into the Mistral actions.

-- Christian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-04 Thread Valeriy Ponomaryov
Yeah, Tom has a wealth of experience. +1

On Thu, Aug 4, 2016 at 12:35 PM, Ramana Raja  wrote:

> +1. Tom's reviews and guidance are helpful
> and spot-on.
>
> -Ramana
>
> On Thursday, August 4, 2016 7:52 AM, Zhongjun (A) 
> wrote:
> > Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core
> reviewer team
> >
> >
> >
> > +1 Tom will be a great addition to the core team.
> >
> >
> >
> >
> >
> >
> > From: Dustin Schoenbrun [mailto:dscho...@redhat.com]
> > Sent: August 4, 2016 4:55
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer
> team
> >
> >
> >
> >
> >
> > +1
> >
> >
> >
> >
> >
> > Tom will be a marvelous resource for us to learn from!
> >
> >
> >
> >
> >
> > Dustin Schoenbrun
> > OpenStack Quality Engineer
> > Red Hat, Inc.
> > dscho...@redhat.com
> >
> >
> >
> >
> >
> > On Wed, Aug 3, 2016 at 4:19 PM, Knight, Clinton <
> clinton.kni...@netapp.com >
> > wrote:
> >
> >
> > +1
> >
> >
> >
> >
> >
> > Tom will be a great asset for Manila.
> >
> >
> > Clinton
> >
> >
> >
> >
> >
> > _
> > From: Ravi, Goutham < goutham.r...@netapp.com >
> > Sent: Wednesday, August 3, 2016 3:01 PM
> > Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core
> reviewer
> > team
> > To: OpenStack Development Mailing List (not for usage questions) <
> > openstack-dev@lists.openstack.org >
> >
> >
> >
> > (Not a core member, so plus 0.02)
> >
> >
> >
> > I’ve learned a ton of things from Tom and continue to do so!
> >
> >
> >
> >
> > From: Rodrigo Barbieri < rodrigo.barbieri2...@gmail.com >
> > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <
> > openstack-dev@lists.openstack.org >
> > Date: Wednesday, August 3, 2016 at 2:48 PM
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> > openstack-dev@lists.openstack.org >
> > Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core
> reviewer
> > team
> >
> >
> >
> >
> >
> > +1
> >
> > Tom contributes a lot to the Manila project.
> >
> > --
> > Rodrigo Barbieri
> > Computer Scientist
> > OpenStack Manila Core Contributor
> > Federal University of São Carlos
> >
> >
> >
> >
> >
> > On Aug 3, 2016 15:42, "Ben Swartzlander" < b...@swartzlander.org > wrote:
> >
> >
> >
> > Tom (tbarron on IRC) has been working on OpenStack (both cinder and
> manila)
> > for more than 2 years and has spent a great deal of time on Manila
> reviews
> > in the last release. Tom brings another package/distro point of view to
> the
> > community as well as former storage vendor experience.
> >
> > -Ben Swartzlander
> > Manila PTL
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][osic] Accessing the osic VPN

2016-08-04 Thread Paul Bourke
For those starting work on the OSIC cluster: I was having some trouble
installing the F5 VPN browser plugin, not to mention it's no good if you
want to access the cluster remotely.


In the spirit of Kolla here's a Dockerfile I knocked up to run the VPN 
client in a container: https://github.com/brk3/docker-f5fpc
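
Usage is roughly along these lines (check the README in the repo for the
actual invocation and any options the client needs; the flags below are only
a guess at what a containerised VPN client typically requires):

    docker build -t f5fpc https://github.com/brk3/docker-f5fpc.git
    docker run --rm -it --privileged --net=host f5fpc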


Hope it helps,
-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-04 Thread Ramana Raja
+1. Tom's reviews and guidance are helpful
and spot-on.

-Ramana

On Thursday, August 4, 2016 7:52 AM, Zhongjun (A)  
wrote:
> Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer 
> team
> 
> 
> 
> +1 Tom will be a great addition to the core team.
> 
> 
> 
> 
> 
> 
> From: Dustin Schoenbrun [mailto:dscho...@redhat.com]
> Sent: August 4, 2016 4:55
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team
> 
> 
> 
> 
> 
> +1
> 
> 
> 
> 
> 
> Tom will be a marvelous resource for us to learn from!
> 
> 
> 
> 
> 
> Dustin Schoenbrun
> OpenStack Quality Engineer
> Red Hat, Inc.
> dscho...@redhat.com
> 
> 
> 
> 
> 
> On Wed, Aug 3, 2016 at 4:19 PM, Knight, Clinton < clinton.kni...@netapp.com >
> wrote:
> 
> 
> +1
> 
> 
> 
> 
> 
> Tom will be a great asset for Manila.
> 
> 
> Clinton
> 
> 
> 
> 
> 
> _
> From: Ravi, Goutham < goutham.r...@netapp.com >
> Sent: Wednesday, August 3, 2016 3:01 PM
> Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer
> team
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org >
> 
> 
> 
> (Not a core member, so plus 0.02)
> 
> 
> 
> I’ve learned a ton of things from Tom and continue to do so!
> 
> 
> 
> 
> From: Rodrigo Barbieri < rodrigo.barbieri2...@gmail.com >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org >
> Date: Wednesday, August 3, 2016 at 2:48 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org >
> Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer
> team
> 
> 
> 
> 
> 
> +1
> 
> Tom contributes a lot to the Manila project.
> 
> --
> Rodrigo Barbieri
> Computer Scientist
> OpenStack Manila Core Contributor
> Federal University of São Carlos
> 
> 
> 
> 
> 
> On Aug 3, 2016 15:42, "Ben Swartzlander" < b...@swartzlander.org > wrote:
> 
> 
> 
> Tom (tbarron on IRC) has been working on OpenStack (both cinder and manila)
> for more than 2 years and has spent a great deal of time on Manila reviews
> in the last release. Tom brings another package/distro point of view to the
> community as well as former storage vendor experience.
> 
> -Ben Swartzlander
> Manila PTL
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [juju charms] How to configure glance charm for specific cnider backend?

2016-08-04 Thread James Page
Hi Andrey

On Wed, 3 Aug 2016 at 14:35 Andrey Pavlov  wrote:

> Instead of adding one more relation to glance, I can relate my charm to
> the new relation 'cinder-volume-service'
>

almost


>
> So there are no more changes in the glance charm.
> And additional in my charm will be -
>
> metadata.yaml -
>
>
>
>
> subordinate: true
> provides:
>   image-backend:
>     interface: cinder
>     scope: container
>

Almost - the interface type should be something like 'glance-backend'
matching the type of the container scoped interface we'd need to add to the
glance charm.

provides:
  image-backend:
    interface: glance-backend
    scope: container

The glance charm needs to interrogate the data presented by the subordinate
charm - I'd suggest one of the data items is 'cinder-backend=True|False' in
which case the glance charm can then set the required configuration option
in the glance-api.conf file (instead of doing that via a relation to cinder
as you proposed in your original changes to glance).


> and then I can relate my charm to glance (and my charm will be installed
> in the same container as glance, so I can configure OS)
> juju add-relation glance:cinder-volume-service scaleio-openstack:image-backend
>
> This option works for me - I don't need to add something to glance config.
> I'm only need to add files to /etc/glance/rootwrap.d/
> and this option allows to do it.
>

I think the experience would be something like:

  juju add-relation glance:image-backend scaleio-openstack:image-backend

based on my feedback above.  A relation to cinder might not be required
at all if all glance needs to know is 'use cinder' rather than any other
additional data such as the location of the cinder-api services etc.

As you state - the scaleio-openstack charm can install the required filters
to /etc/glance/rootwrap.d - which is great as it moves the scaleio specific
bits into a specific charm for scaleio (which I like), rather than
overloading and increasing the complexity of the core glance charm.


> I've made additional review - https://review.openstack.org/#/c/350565/
>

Looking today -  I think we can refine things via the review if the overall
design intent is agreed here.

Thanks for your work on this feature!

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [juju charms] How to configure glance charm for specific cnider backend?

2016-08-04 Thread James Page
On Tue, 2 Aug 2016 at 16:54 Andrey Pavlov  wrote:

> James, thank you for your answer.
>

No problem.

> I'll file a bug against glance - but in current releases the glance charm
> has to do it itself, right?
>

We should be able to add the required bits using an Ubuntu SRU - let's raise
the bug and see exactly what needs to be done, and then we can decide
whether the charm workaround is still required.

> I'm not sure that I correctly understand your question.
> I suppose that deployment will have glance and cinder on different
> machines.
>

yes - principal charms should typically be deployed in their own units, as
they assume ownership of the filesystem.


> Also there will be one relation between cinder and glance to configure
> glance to store images in cinder.
>

ack

> Other steps are optional -
> If cinder uses a specific backend that needs additional configuration
> - then it can be done via the storage-backend relation (from a subordinate
> charm).
> If this backend needs to configure glance's filters or glance's config
> - then it should be done via a subordinate charm to glance (but
> glance doesn't have such relations now).
>

No - we'd need to add a container scoped 'image-backend' relation to
glance, allowing a subordinate to be deployed with glance to install the
required rootwrap configuration, dependencies etc...

Your next email covers that - so I'll respond there.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Duncan Thomas
On 1 August 2016 at 18:14, Adrian Otto  wrote:

> I am struggling to understand why we would want to remove projects from
> our big tent at all, as long as they are being actively developed under the
> principles of "four opens". It seems to me that working to disqualify such
> projects sends an alarming signal to our ecosystem. The reason we made the
> big tent to begin with was to set a tone of inclusion. This whole
> discussion seems like a step backward. What problem are we trying to solve,
> exactly?
>

Any project existing in the big tent sets a significant barrier to entry
(policy, technical, mindshare) for any competing project that might spring
up. The cost of entry as an individual into a single-vendor project is much
higher in general than into a diverse one (back-channel communications,
differences in vision, monoculture, commercial pressures, etc.), and so
having a non-diverse project in the big tent reduces the possibility of a
better replacement appearing.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nova compute reports the disk usage

2016-08-04 Thread Tuan Luong
Hi Chris,

Yes, we used a qcow image. However, something is still not clear to me. What
I know is that:

- disk_available_least = free_disk - disk_over_commit
- disk_over_commit = int(virt_size) - physical_disk_size

Before launching: 
- free_disk = 1025
- disk_available_least = 1016

If we use the normal flavor with a 600Gb disk and no ephemeral, the result is
good, with both values above reduced by 600Gb.
If we use the flavor without disk but with 600Gb ephemeral, the result is
disk_available_least = -184. The value of free_disk is still good at 425Gb.
In this case, it seems that the value of disk_over_commit is 609Gb, which is
calculated from virt_size and physical_disk_size. What I think is that the
virt_size here is 600Gb and physical_disk_size is a couple of Gb. How can the
value 609Gb appear? I agree that the backing_file of the ephemeral_disk is
sparse and 600Gb.

Best,

Tuan
-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com] 
Sent: August 03, 2016 18:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] Nova compute reports the disk usage

On 08/03/2016 09:13 AM, Tuan Luong wrote:
> Hi Jay, Thank you for the reply. Actually, I did ask this question to 
> make sure that everything is still going well with disk usage report 
> of Nova. We had the problem related to wrong report (IMHO it is 
> wrong). The situation is
> below:
>
> - We launch an instance with 600Gb of ephemeral disk of flavor without 
> specifying extra ephemeral disk. The total free_disk before launching 
> is 1025Gb, the total disk_available_least is 1016Gb. - After 
> successful launching, what we have is the value of disk_available_least 
> = -184Gb therefore we could not launch the next instance.
>
> It seems like the value of disk_available_least decreases 2*600Gb when 
> I think it should be 600Gb. The value of free_disk after launching is 
> calculated well. I would not believe that the disk_over_commit in this case 
> is 600Gb.
> Otherwise, we also check the instance that indeed it has only 600Gb 
> mounted as vdb.

Assuming you're using qcow, the maximum disk space consumed by a qcow disk is 
"size of backing file" + "size of differences from backing file".  Thus, if you 
have a single instance using a given backing file, the worst-case consumption 
is actually twice the size of your disk.

I think the backing file for an ephemeral disk is actually a sparse file, but 
on a cursory examination it will appear to be consuming 600GB.
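
To put numbers on it with the figures from your message: the worst case
charged against disk_available_least for that instance is 2 * 600GB =
1200GB, and 1016GB - 1200GB = -184GB, which matches the value you're
seeing.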

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Improving Swift deployments with TripleO

2016-08-04 Thread Giulio Fidente

On 08/02/2016 09:36 PM, Christian Schwede wrote:

> Hello everyone,

thanks Christian,

> I'd like to improve the Swift deployments done by TripleO. There are a
> few problems today when deployed with the current defaults:
>
> 1. Adding new nodes (or replacing existing nodes) is not possible,
> because the rings are built locally on each host and a new node doesn't
> know about the "history" of the rings. Therefore rings might become
> different on the nodes, and that results in an unusable state eventually.


one of the ideas for this was to use a tempurl in the undercloud swift 
where to upload the rings built by a single overcloud node, not by the 
undercloud


so I proposed a new heat resource which would permit us to create a 
swift tempurl in the undercloud during the deployment


https://review.openstack.org/#/c/350707/

if we build the rings on the undercloud we can ignore this and use a 
mistral action instead, as pointed by Steven


the good thing about building rings in the overcloud is that it doesn't 
force us to have a static node mapping for each and every deployment but 
it makes hard to cope with heterogeneous environments



> 2. The rings are only using a single device, and it seems that this is
> just a directory and not a mountpoint with a real device. Therefore data
> is stored on the root device - even if you have 100TB disk space in the
> background. If not fixed manually your root device will run out of space
> eventually.


for the disks instead I am thinking to add a create_resources wrapper in 
puppet-swift:


https://review.openstack.org/#/c/350790
https://review.openstack.org/#/c/350840/

so that we can pass via hieradata per-node swift::storage::disks maps

we have a mechanism to push per-node hieradata based on the system uuid, 
we could extend the tool to capture the nodes (system) uuid and generate 
per-node maps


then, with the above puppet changes and having the per-node map and the 
rings download url, we could feed them to the templates, replace with an 
environment the rings building implementation and deploy without further 
customizations


what do you think?
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Adding opensuse as new driver to Magnum

2016-08-04 Thread Michal Jura

Hi,

I would like to put up for discussion adding a new driver to Magnum. We would
like to propose openSUSE with Kubernetes as a driver. I did some initial
work in bug


https://launchpad.net/bugs/1600197
and changes
https://review.openstack.org/339480
https://review.openstack.org/349994

I've also got some comments from you about how this should proceed.

As maintainer for this change I can propose myself.

I have a couple of questions about moving this driver to the /contrib
directory. If I do this, how should the driver be installed from there?


Thank you for all your answers and help with this,

Best regards,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Thomas Goirand
On 08/01/2016 09:39 AM, Thierry Carrez wrote:
> But if a project is persistently single-vendor after some time and
> nobody seems interested to join it, the technical value of that project
> being "in" OpenStack rather than a separate project in the OpenStack
> ecosystem of projects is limited. It's limited for OpenStack (why
> provide resources to support a project that is obviously only beneficial
> to one organization ?), and it's limited to the organization itself (why
> go through the OpenStack-specific open processes when you could shortcut
> it with internal tools and meetings ? why accept the oversight of the
> Technical Committee ?).

A project can still be useful for everyone with a single vendor
contributing to it, even after a long period of existence. IMO that's
not the issue we're trying to solve.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-04 Thread Thierry Carrez
Devdatta Kulkarni wrote:
> As current PTL of one of the projects that has the team:single-vendor tag,
> I have following thoughts/questions on this issue.

In preamble I'd like to reiterate that the proposal is not on the table
at this stage -- this is just a discussion to see whether it would be a
good thing or a bad thing.

> - Is the need for periodically deciding membership in the big tent primarily 
> stemming
> from the question of managing resources (for the future design summits and 
> cross-project work)?

No, it's not the primary reason. As I explained elsewhere in the thread,
it's more that (from an upstream open source project perspective)
OpenStack is not a useful vehicle for open source projects that are and
will always be driven by a single vendor. The value we provide (through
our governance, principles and infra systems) is in enabling open
collaboration between organizations. A project that will clearly always
stay single-vendor (for one reason or another) does not get or bring
extra technical value by being developed within "OpenStack" (i.e. under
the Technical Committee oversight).

> If so, have we thought about alternative solutions such, say, using the 
> team:diverse-affiliation
> tag for making such decisions? For instance, we could say that a project will 
> get
> space at the design summit only if it has the team:diverse-affiliation tag? 
> That way, projects
> are incentivized to purse this tag/goal if they want to participate in the 
> design summit.
> Also, adding/removing tag might be simpler than trying to get into big tent 
> again (say, after a project
> has been removed and then gains diverse affiliation afterwards and wants to 
> participate in the
> design summit, would they have to go through big tent application again?).

Actually this is being considered for the Project Teams Gathering
events: we may not provide space for single-vendor projects (since the
value for contributors from one single organization to travel to a
remote location to have a team meeting is limited). Final decision will
be taken based on space availability at the chosen venue.

> - Another issue with using the number of vendors as a metric 
> to decide membership in big tent is that it rules out any project which may 
> be 
> independently started in the open (not by any specific vendor, but by a team 
> of independent contributors),
> and which is following the 4 opens, from being a part of the big tent.

The main issue I now see with this idea is that you REALLY don't want to
flip-flop between in and out based on reaching 89% or 91% on an abstract
metric. Which is why I'd suggest 18 months at single-vendor should only
trigger a *review* by the TC of the affected project. That review would
assess if there is any significant chance that the project grows a
diverse contributor base in the future (and if there is, the project
should stay in).

I expect we'll see two cases in those reviews. On one hand, smallish
projects that struggle to attract contributors or grow diversity, but
are trying and have nothing structural in them discouraging
contributions from other organizations. Those should definitely stay in.
On the other hand, projects that are clearly part of the marketing
strategy of one particular vendor, and the project is not generally as
useful to the rest of the community, and I'd advocate that those should
not stay under TC governance as an official OpenStack project.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Stepping Down.

2016-08-04 Thread Thierry Carrez
Morgan Fainberg wrote:
> Based upon my personal time demands among a number of other reasons I
> will be stepping down from the Technical Committee. This is planned to
> take effect with the next TC election so that my seat will be up to be
> filled at that time.
> 
> For those who elected me in, thank you.

Sorry to see you go, Morgan. Stepping down gracefully when our
priorities no longer align with the positions we hold is one of the
principles OpenStack is based on, and you demonstrated it well. Thanks
for your input and vision!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Watcher] Promote Alexchadin to the core team.

2016-08-04 Thread Vincent FRANÇOISE
Alexander Chadin has proven he knows the various moving parts of Watcher
by participating in the numerous design discussions and blueprints during
the last months, which makes him a precious asset for our team.

So that's a definite +1 for me as well.


On 04/08/2016 09:11, Jean-Émile DARTOIS wrote:
> Hi all,
> 
> alexchadin has consistently contributed to Watcher for a while. He is
> also very active on IRC. Based on his contributions (e.g. the
> continuously-optimization blueprint) and expertise, which add concrete
> value to Watcher, I'd like to promote alexchadin to the core team.
>  
> According to the OpenStack Governance process [1], please vote +1 or -1
> on the nomination.
> 
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
> 
> Best regards,
> 
> jed56
> 
> 
> Jean-Emile
> DARTOIS
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Watcher] Promote Alexchadin to the core team.

2016-08-04 Thread Jean-Émile DARTOIS
Hi all,

alexchadin has consistently contributed to Watcher for a while. He is also very
active on IRC. Based on his contributions (e.g. the continuously-optimization
blueprint) and expertise, which add concrete value to Watcher, I'd like to
promote alexchadin to the core team.

According to the OpenStack Governance process [1], please vote +1 or -1 on the
nomination.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,

jed56


Jean-Emile
DARTOIS

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project mascots update

2016-08-04 Thread Mike Carden
On Thu, Aug 4, 2016 at 4:26 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

>
> It would be really awesome if, in true OSt and OSS spirit this work
> happened in an OpenStack repository with an open, text based format like
> SVG. This way people could contribute and review.
>
>
I am strongly in favour of images being stored in open formats. Right now
the most widely supported open formats are PNG and SVG. Let's make sure
that as often as possible, we all store non-photographic images in formats
like these.

-- 
MC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project mascots update

2016-08-04 Thread Antoni Segura Puimedon
On Wed, Aug 3, 2016 at 11:00 PM, Heidi Joy Tretheway  wrote:

> *Steve Hardy wrote:*
> *"I have one question regarding the review process for the logos when
> they are drafted? In the cases where projects have their existing community
> generated logos I can imagine there will be a preference to stick with
> something that’s somehow derived from their existing logo…" *
>
> HJ: You’re right, Steve. That’s why every project that had an existing
> community-illustrated logo had first option to keep that mascot in the
> revised logo, and in most cases they chose to do this. So Oslo’s moose,
> Senlin’s forest, Tacker’s squid, and Cloudkitty’s cat (among others) will
> still be the prominent feature in their logo.
>
> *Steve Hardy wrote:*
> *“In cases where a new logo is produced I'm sure community enthusiasm and
> acceptance will be greater if team members have played a part in the logo
> design process or at least provided some feedback prior to the designs
> being declared final?”*
>
> HJ: Absolutely. That’s why we encouraged project teams to work together to
> select their mascot. I received dozens of team etherpads and Condorcet
> polls from PTLs to show how the team decided their mascot candidates. The
> PTLs confirmed their winners for the list you see on
> http://www.openstack.org/project-mascots. You can also see an example of
> an illustration style there, and we expect to have the first five logos
> (with the final illustration style) in hand shortly.
>
> It’s going to be a major effort to complete 50 illustrations x 3 logo
> variations prior to Barcelona, but I think we can make it. That said, it’s
> not possible to do several rounds of revisions with each project team and
> the illustrators. What I’ve been doing instead is listening carefully to
> project team requests and pulling photos to share with the illustrators
> that best show what the teams intend. I’m happy to share that with anyone
> who asks.
>

It would be really awesome if, in true OSt and OSS spirit this work
happened in an OpenStack repository with an open, text based format like
SVG. This way people could contribute and review.


> Heidi Joy
> __
> Heidi Joy Tretheway
> Senior Marketing Manager, OpenStack Foundation
> 503 816 9769  |  skype: heidi.tretheway
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

