[openstack-dev] [nova] The createBackup API

2018-03-29 Thread Alex Xu
There is a spec proposal to fix a bug in the createBackup API with a microversion
(https://review.openstack.org/#/c/511825/).

When the rotation parameter is '0', the createBackup API just takes a snapshot
and then deletes all the snapshots. That is meaningless behaviour.

But there is one thing we hope to get wider input on. We have said before that
all the Nova APIs should be primitive; an API shouldn't just be a wrapper around
another API.

So createBackup sounds like it just uses the createImage API to create a
snapshot, uploads the snapshot into Glance with an index number in the image
name, and rotates the images after each snapshot.

So it should be something a client script can do itself with the createImage
API.
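
For illustration, here is a rough client-side sketch of that rotation using
only createImage (assuming openstacksdk; the naming scheme, rotation logic and
error handling are illustrative, not the actual createBackup behaviour):

```
import time

import openstack


def backup_server(conn, server_id, backup_name, rotation):
    # Take a snapshot via the normal createImage path, embedding an
    # index (here a timestamp) in the image name.
    server = conn.compute.get_server(server_id)
    conn.compute.create_server_image(
        server, name='%s-%d' % (backup_name, int(time.time())))

    # Rotate: keep only the newest `rotation` images with this prefix.
    # Note that rotation=0 deletes everything, which is exactly the
    # meaningless behaviour described above.
    images = [img for img in conn.image.images()
              if img.name and img.name.startswith(backup_name + '-')]
    images.sort(key=lambda img: img.created_at, reverse=True)
    for old in images[rotation:]:
        conn.image.delete_image(old)


conn = openstack.connect(cloud='devstack')  # cloud name is just an example
backup_server(conn, 'my-server-uuid', 'nightly', rotation=7)
```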

We have two options here:
#1. Fix the bug with a microversion. We aren't sure anyone really uses '0' in
real life, so it isn't clear that fixing the bug with a microversion is worth
it.
#2. Deprecate the backup API with a microversion and leave the bug alone.
Document how users can do the same thing with a client script.

Looking for your comments.

Thanks
Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] About the metric for the routes

2018-03-29 Thread zhang.yanxian
Hi all,

A routing metric is a quantitative value used to evaluate path cost.
But Neutron can't specify different metrics for routes with the same destination
address, which would be useful to realize FRR (Fast Reroute) in telecom and NFV
scenarios.

So we are going to introduce a new metric value for routes.
Any suggestion is welcome; the bug is here:
https://bugs.launchpad.net/neutron/+bug/1759790
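
To illustrate the idea only (the "metric" key does not exist in the Neutron
API today; this is purely a sketch of the proposal), a router update could
then carry two routes to the same destination with different costs:

```
# Hypothetical body for PUT /v2.0/routers/{router_id}; the "metric" key is
# the proposed addition and is NOT a current Neutron field.
router_update = {
    "router": {
        "routes": [
            {"destination": "10.0.0.0/24", "nexthop": "192.168.1.1", "metric": 10},
            {"destination": "10.0.0.0/24", "nexthop": "192.168.2.1", "metric": 100},
        ]
    }
}
```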

Thanks in advance for suggestions.

Best Regards,
yanxian zhang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-03-29 Thread Matt Riedemann

On 3/29/2018 6:50 PM, Sean McGinnis wrote:

Maybe we can add a "Reimaging" state to the volume? Then Nova could poll for it
to go from that back to available?


That would be fine with me, and maybe similar to how 'extending' and 
'retyping' work for an attached volume?


Nova wouldn't wait for the volume to go to 'available'; we don't want it
to go to 'available'. We'd just wait for it to go back to 'reserved'.
During a rebuild the instance still needs to keep the volume logically
attached to it so another instance can't grab it.
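
A minimal sketch of that polling, assuming a hypothetical 'reimaging' status
that clears back to 'reserved' (the status names, timeout and session setup
are assumptions; only the python-cinderclient calls are existing API):

```
import time

from cinderclient import client as cinder_client


def wait_for_reimage(cinder, volume_id, timeout=300, interval=5):
    # Poll until the hypothetical 'reimaging' status clears; the volume is
    # expected to go back to 'reserved', not 'available'.
    deadline = time.time() + timeout
    while time.time() < deadline:
        volume = cinder.volumes.get(volume_id)
        if volume.status == 'error':
            raise RuntimeError('re-image of volume %s failed' % volume_id)
        if volume.status != 'reimaging':
            return volume
        time.sleep(interval)
    raise RuntimeError('timed out waiting for volume %s' % volume_id)


# sess would be an authenticated keystoneauth1 session
cinder = cinder_client.Client('3.44', session=sess)
wait_for_reimage(cinder, 'volume-uuid')
```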


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-29 Thread Richard Wellum
On Thu, Mar 29, 2018 at 8:14 PM Surya Singh 
wrote:

> Dear All,
>
> Thanks Rich for putting thoughts on continuation with kolla-k8s.
>
>
> On Fri, Mar 30, 2018 at 2:26 AM, Richard Wellum 
> wrote:
> >
> > Hi,
> >
> > So as a current Kolla-Kubernetes Core - I have a slightly different
> opinion than most, I'll try to verbalize it coherently.
> >
> > Let's talk about what Kolla is:
> >
> > Kolla is a project that builds OpenStack docker images, stores them on
> dockerhub, and provides tools to build your own images from your own
> source. Both the images and the tools it provides, are widely used, very
> popular and extremely stable; TripleO, openstack-helm and kolla-ansible to
> name a few are all deployment methods that use Kolla.
> >
> > Kolla has two sub-projects, that both revolve around deployment methods;
> kolla-ansible and kolla-kubernetes. Kolla-ansible is proven, stable and
> used by many in the industry. Part of Kolla's quality is its rock-solid
> dependability in many scenarios. As Kubernetes took over most of the COE
> world, it's only correct that the Kolla team created this sub-project; if
> swarm became suddenly very popular then we should create a kolla-swarm
> sub-project.
> >
> > So if we abandon kolla-kubernetes ('sunset' seems much more romantic
> admittedly) - we are abandoning the core Kolla team's efforts in this
> space. No matter how good openstack-helm is (and I've deployed it, know a
> lot of the cores and it's truly excellent and well driven), what happens
> down the line if openstack-helm decide to move on from Kolla - say
> focussing on Loci images or a new flavor that comes along? Then Kolla the
> core project, will no longer have any validation of it's docker
> images/containers running on Kubernetes. That to me is the big risk here.
> >
> > The key issue in my opinion is that the core Kolla team has focussed on
> kolla-ansible almost exclusively, and have not migrated to using
> kolla-kubernetes as well. As the code base has stagnated, the gates get
> into trouble, and new features and configurations added to kolla-ansible
> are not translated to kolla-kubernetes.
> >
> > So I think the real question is not whether we should 'sunset'
> kolla-kubernetes the sub-project, but should we drop Kolla support on
> Kubernetes? Relying on a different team to do so is probably not the
> answer; although it's the one championed in this thread.
>
> +1
>
> >
> > In my opinion we should set some realistic goals before we sunset:
> >
> > 1. Pick a feature set for a Rocky v1.0 release, and commit to trying to
> get there. We have a long list of items; maybe pare this down to something
> reasonable.
>
> I agree that we should have a feature set for the Rocky v1.0 release, and
> AFAIK the community already has that.
>
> > 2. Agreement within Kolla core team to learn kolla-kubernetes and start
> to put a percentage of time into this sub-project.
> > 3. Identify the people who are genuinely interested in working with it
> within the Kolla team.
>
> Though I am currently not the MVP in kolla-k8s, I would love to
> help with some concrete item for v1.0. IMHO, before that we need a
> leader and then identify volunteers.
> And for that we may need more thought on this:
> https://review.openstack.org/#/c/552531
>
I missed this and will review.

Thanks,

||Rich


> >
> > Without '2' I think sunsetting is the way forward, but the risks should
> be fully understood and hopefully I've made a case for what those are above.
> >
> > Thanks,
> >
> > ||Rich
> >
> >
> > On Wed, Mar 28, 2018 at 1:54 PM Chuck Short  wrote:
> >>
> >> +1
> >>
> >> Regards
> >> chuck
> >> On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang <
> zhang.lei@gmail.com> wrote:
> >>>
> >>> There are two projects that solve the problem of running OpenStack on
> >>> Kubernetes: OpenStack-helm and kolla-kubernetes. They both
> >>> leverage the Helm tool for orchestration. There were some different
> >>> perspectives at the beginning, which meant the two teams could not work
> >>> together.
> >>>
> >>> But recently the differences have become very small, and there are also no
> >>> active contributors in the kolla-kubernetes project.
> >>>
> >>> So I propose to retire the kolla-kubernetes project. If you are still
> >>> interested in running OpenStack on kubernetes, please refer to
> >>> openstack-helm project.
> >>>
> >>> --
> >>> Regards,
> >>> Jeffrey Zhang
> >>> Blog: http://xcodest.me
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> 

Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-29 Thread Richard Wellum
Hi Gema,

On Thu, Mar 29, 2018 at 2:48 PM Gema Gomez  wrote:

>
>
> On 29/03/18 18:26, Richard Wellum wrote:
> > Hi,
> >
> > So as a current Kolla-Kubernetes Core - I have a slightly different
> > opinion than most, I'll try to verbalize it coherently.
> >
> > Let's talk about what Kolla is:
> >
> > Kolla is a project that builds OpenStack docker images, stores them on
> > dockerhub, and provides tools to build your own images from your own
> > source. Both the images and the tools it provides, are widely used, very
> > popular and extremely stable; TripleO, openstack-helm and kolla-ansible
> > to name a few are all deployment methods that use Kolla.
> >
> > Kolla has two sub-projects, that both revolve around deployment methods;
> > kolla-ansible and kolla-kubernetes. Kolla-ansible is proven, stable and
> > used by many in the industry. Part of Kolla's quality is its rock-solid
> > dependability in many scenarios. As Kubernetes took over most of the COE
> > world, it's only correct that the Kolla team created this sub-project;
> > if swarm became suddenly very popular then we should create a
> > kolla-swarm sub-project.
> >
> > So if we abandon kolla-kubernetes ('sunset' seems much more romantic
> > admittedly) - we are abandoning the core Kolla team's efforts in this
> > space. No matter how good openstack-helm is (and I've deployed it, know
> > a lot of the cores and it's truly excellent and well driven), what
> > happens down the line if openstack-helm decides to move on from Kolla -
> > say focussing on Loci images or a new flavor that comes along? Then
> > Kolla, the core project, will no longer have any validation of its
> > docker images/containers running on Kubernetes. That to me is the big
> > risk here.
> >
> > The key issue in my opinion is that the core Kolla team has focussed on
> > kolla-ansible almost exclusively, and have not migrated to using
> > kolla-kubernetes as well. As the code base has stagnated, the gates get
> > into trouble, and new features and configurations added to
> > kolla-ansible are not translated to kolla-kubernetes.
> >
> > So I think the real question is not whether we should 'sunset'
> > kolla-kubernetes the sub-project, but should we drop Kolla support on
> > Kubernetes? Relying on a different team to do so is probably not the
> > answer; although it's the one championed in this thread.
> >
> > In my opinion we should set some realistic goals before we sunset:
> >
> > 1. Pick a feature set for a Rocky v1.0 release, and commit to trying to
> > get there. We have a long list of items; maybe pare this down to
> > something reasonable.
>
> Are you volunteering to drive this effort forward? I'd be happy to help
> define MVP for Rocky.
>

Yes I am.


>
> > 2. Agreement within Kolla core team to learn kolla-kubernetes and start
> > to put a percentage of time into this sub-project.
>
> Whilst this would be ideal, we cannot really force people that have no
> interest in this sub-project to contribute.
>

That's not what I was implying; I am more trying to get a shift in attitude
within the Kolla team. For example, if I am working on kolla-kubernetes and
make a change that breaks kolla-ansible, the Kolla community would of course
expect me to fix it. The same goes if I added a feature and didn't apply and
test it with kolla-ansible. So I'm just saying that the same should apply in
the other direction, which it currently does not. As a community we should
support both deployment methods. If we don't, then we are all agreeing that
Kolla will not have support on Kubernetes from the core Kolla community.


>
> > 3. Identify the people who are genuinely interested in working with it
> > within the Kolla team.
>
> +1 if we find enough contributors to make the reasonable list of items
> happen during Rocky.
>
> > Without '2' I think sunsetting is the way forward, but the risks should
> > be fully understood and hopefully I've made a case for what those are
> above.
>
> How many contributors are necessary to make MVP?
>
We were doing fairly well with 4-5 contributors imo.

Thanks,

||Rich


>
> Cheers,
> Gema
>
> >
> > Thanks,
> >
> > ||Rich
> >
> >
> > On Wed, Mar 28, 2018 at 1:54 PM Chuck Short  > > wrote:
> >
> > +1
> >
> > Regards
> > chuck
> > On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang
> > > wrote:
> >
> > There are two projects that solve the problem of running OpenStack on
> > Kubernetes: OpenStack-helm and kolla-kubernetes. They both
> > leverage the Helm tool for orchestration. There were some different
> > perspectives at the beginning, which meant the two teams could not work
> > together.
> >
> > But recently the differences have become very small, and there are
> > also no active contributors in the kolla-kubernetes project.
> >
> > So I propose to 

Re: [openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports

2018-03-29 Thread Jeffrey Zhang
Cool. Kolla will try to implement it.

On Fri, Mar 30, 2018 at 7:12 AM, Paul Belanger 
wrote:

> On Thu, Mar 29, 2018 at 06:14:06PM -0400, David Moreau Simard wrote:
> > Hi,
> >
> > By default, all jobs currently benefit from the generation of a static
> > ARA report located in the "ara" directory at the root of the log
> > directory.
> > Due to scalability concerns, these reports were only generated when a
> > job failed and were not available on successful runs.
> >
> > I'm happy to announce that you can expect ARA reports to be available
> > for every job from now on -- including the successful ones !
> >
> > You'll notice a subtle but important change: the report directory will
> > henceforth be named "ara-report" instead of "ara".
> >
> > Instead of generating and saving a HTML report, we'll now only save
> > the ARA database in the "ara-report" directory.
> > This is a special directory from the perspective of the
> > logs.openstack.org server and ARA databases located in such
> > directories will be loaded dynamically by a WSGI middleware.
> >
> > You don't need to do anything to benefit from this change -- it will
> > be pushed to all jobs that inherit from the base job by default.
> >
> > However, if you happen to be using a "nested" installation of ARA and
> > Ansible (i.e, OpenStack-Ansible, Kolla-Ansible, TripleO, etc.), this
> > means that you can also leverage this feature.
> > In order to do that, you'll want to create an "ara-report" directory
> > and copy your ARA database inside before your logs are collected and
> > uploaded.
> >
> I believe this is an important task we should also push on for the
> projects you listed above. The main reason to do this is to simplify job
> uploads and filesystem demands (thanks clarkb).
>
> Let's see if we can update these projects in the coming week or two!
>
> Great work.
>
> > To help you visualize:
> > /ara-report <-- This is the default Zuul report
> > /logs/ara <-- This wouldn't be loaded dynamically
> > /logs/ara-report <-- This would be loaded dynamically
> > /logs/some/directory/ara-report <-- This would be loaded
> dynamically
> >
> > For more details on this feature of ARA, you can refer to the
> documentation [1].
> >
> > Let me know if you have any questions !
> >
> > [1]: https://ara.readthedocs.io/en/latest/advanced.html
> >
> > David Moreau Simard
> > Senior Software Engineer | OpenStack RDO
> >
> > dmsimard = [irc, github, twitter]
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-29 Thread Surya Singh
Dear All,

Thanks Rich for putting your thoughts down on continuing with kolla-k8s.


On Fri, Mar 30, 2018 at 2:26 AM, Richard Wellum  wrote:
>
> Hi,
>
> So as a current Kolla-Kubernetes Core - I have a slightly different opinion 
> than most, I'll try to verbalize it coherently.
>
> Let's talk about what Kolla is:
>
> Kolla is a project that builds OpenStack docker images, stores them on 
> dockerhub, and provides tools to build your own images from your own source. 
> Both the images and the tools it provides, are widely used, very popular and 
> extremely stable; TripleO, openstack-helm and kolla-ansible to name a few are 
> all deployment methods that use Kolla.
>
> Kolla has two sub-projects, that both revolve around deployment methods; 
> kolla-ansible and kolla-kubernetes. Kolla-ansible is proven, stable and used 
> by many in the industry. Part of Kolla's quality is its rock-solid 
> dependability in many scenarios. As Kubernetes took over most of the COE 
> world, it's only correct that the Kolla team created this sub-project; if 
> swarm became suddenly very popular then we should create a kolla-swarm 
> sub-project.
>
> So if we abandon kolla-kubernetes ('sunset' seems much more romantic 
> admittedly) - we are abandoning the core Kolla team's efforts in this space. 
> No matter how good openstack-helm is (and I've deployed it, know a lot of the 
> cores and it's truly excellent and well driven), what happens down the line 
> if openstack-helm decides to move on from Kolla - say focussing on Loci images 
> or a new flavor that comes along? Then Kolla, the core project, will no longer 
> have any validation of its docker images/containers running on Kubernetes. 
> That to me is the big risk here.
>
> The key issue in my opinion is that the core Kolla team has focussed on 
> kolla-ansible almost exclusively, and have not migrated to using 
> kolla-kubernetes as well. As the code base has stagnated, the gates get into 
> trouble, and new features and configurations added to kolla-ansible are not 
> translated to kolla-kubernetes.
>
> So I think the real question is not whether we should 'sunset' 
> kolla-kubernetes the sub-project, but should we drop Kolla support on 
> Kubernetes? Relying on a different team to do so is probably not the answer; 
> although it's the one championed in this thread.

+1

>
> In my opinion we should set some realistic goals before we sunset:
>
> 1. Pick a feature set for a Rocky v1.0 release, and commit to trying to get 
> there. We have a long list of items; maybe pare this down to something 
> reasonable.

I agree that we should have a feature set for the Rocky v1.0 release, and
AFAIK the community already has that.

> 2. Agreement within Kolla core team to learn kolla-kubernetes and start to 
> put a percentage of time into this sub-project.
> 3. Identify the people who are genuinely interested in working with it within 
> the Kolla team.

Though I am currently not the MVP in kolla-k8s, I would love to
help with some concrete item for v1.0. IMHO, before that we need a
leader and then identify volunteers.
And for that we may need more thought on this:
https://review.openstack.org/#/c/552531

>
> Without '2' I think sunsetting is the way forward, but the risks should be 
> fully understood and hopefully I've made a case for what those are above.
>
> Thanks,
>
> ||Rich
>
>
> On Wed, Mar 28, 2018 at 1:54 PM Chuck Short  wrote:
>>
>> +1
>>
>> Regards
>> chuck
>> On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang  
>> wrote:
>>>
>>> There are two projects that solve the problem of running OpenStack on
>>> Kubernetes: OpenStack-helm and kolla-kubernetes. They both
>>> leverage the Helm tool for orchestration. There were some different perspectives
>>> at the beginning, which meant the two teams could not work together.
>>>
>>> But recently the differences have become very small, and there are also no active
>>> contributors in the kolla-kubernetes project.
>>>
>>> So I propose to retire the kolla-kubernetes project. If you are still
>>> interested in running OpenStack on kubernetes, please refer to
>>> openstack-helm project.
>>>
>>> --
>>> Regards,
>>> Jeffrey Zhang
>>> Blog: http://xcodest.me
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-03-29 Thread Sean McGinnis
> 
> >
> >Ideally, from my perspective, Nova would take care of the detach/attach 
> >portion
> >and Cinder would only need to take care of imaging the volume.
> 
> Agree. :) And yeah, I pointed this out in the nova spec for volume-backed
> rebuild also. I think nova can basically handle this like it does for shelve
> today, and we'd do something like this:
> 
> 1. disconnect the volume from the host
> 2. create a new empty volume attachment for the volume and instance - this
> is needed so the volume stays 'reserved' while we re-image it
> 3. delete the old volume attachment
> 4. call the new cinder re-image API
> 5. once the volume is available (TODO: how would we know?)

Maybe we can add a "Reimaging" state to the volume? Then Nova could poll for it
to go from that back to available. Since Nova is driving things, I would be
hesitant to expect and assume that Cinder is appropriately configured to call
back into Nova.

Or a notification?

Or...?

> 6. re-attach the volume by updating the attachment with the host connector,
> connect on the host, and complete the attachment (marks the volume as in-use
> again)
> 
> -- 
> 
> Thanks,
> 
> Matt
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] All Hail our Newest Release Name - OpenStack Stein

2018-03-29 Thread Zhipeng Huang
In hindsight, it would have been much more fun if the R release had been named Ramm :P

On Fri, Mar 30, 2018 at 3:10 AM, Paul Belanger 
wrote:

> Hi everybody!
>
> As the subject reads, the "S" release of OpenStack is officially "Stein". As
> has been the case with previous elections, this wasn't the first choice; that
> was "Solar".
>
> Solar was judged to have legal risk, so as per our name selection process,
> we
> moved to the next name on the list.
>
> Thanks to everybody who participated, and look forward to making OpenStack
> Stein
> a great release.
>
> Paul
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tap-as-a-service] publish on pypi

2018-03-29 Thread Clark Boylan
On Wed, Mar 28, 2018, at 7:59 AM, Takashi Yamamoto wrote:
> hi,
> 
> i'm thinking about publishing the latest release of tap-as-a-service on pypi.
> background: https://review.openstack.org/#/c/555788/
> iirc, the naming (tap-as-a-service vs neutron-taas) was one of the concerns
> when we talked about this topic last time. (long time ago. my memory is dim.)
> do you have any ideas or suggestions?
> probably i'll just use "tap-as-a-service" unless anyone has strong opinions.
> because:
> - it's the name we use the most frequently
> - we are not neutron (yet?)

http://git.openstack.org/cgit/openstack/tap-as-a-service/tree/setup.cfg#n2 
shows that tap-as-a-service is the existing package name, so it is probably a good
one to go with, as anyone who already has it installed from source should have pip
do the right thing when talking to PyPI.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] Where did the ARA logs go?

2018-03-29 Thread Clark Boylan
On Wed, Mar 28, 2018, at 8:13 AM, Jeremy Stanley wrote:
> On 2018-03-28 09:26:49 -0500 (-0500), Sean McGinnis wrote:
> [...]
> > I believe the ARA logs are only captured on failing jobs.
> 
> Correct. This was a stop-gap some months ago when we noticed we were
> overrunning our inode capacity on the logserver. ARA was only
> one of the various contributors to that increased consumption but
> due to its original model based on numerous tiny files, limiting it
> to job failures (where it was most useful) was one of the ways we
> temporarily curtailed inode utilization. ARA has very recently grown
> the ability to stuff all that data into a single sqlite file and
> then handle it browser-side, so I expect we'll be able to switch
> back to collecting it for all job runs again fairly soon.

The switch has been flipped and you should start to see ara reports on all job 
logs again. Thank you dmsimard for making this happen. More details at 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128902.html

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports

2018-03-29 Thread Paul Belanger
On Thu, Mar 29, 2018 at 06:14:06PM -0400, David Moreau Simard wrote:
> Hi,
> 
> By default, all jobs currently benefit from the generation of a static
> ARA report located in the "ara" directory at the root of the log
> directory.
> Due to scalability concerns, these reports were only generated when a
> job failed and were not available on successful runs.
> 
> I'm happy to announce that you can expect ARA reports to be available
> for every job from now on -- including the successful ones !
> 
> You'll notice a subtle but important change: the report directory will
> henceforth be named "ara-report" instead of "ara".
> 
> Instead of generating and saving a HTML report, we'll now only save
> the ARA database in the "ara-report" directory.
> This is a special directory from the perspective of the
> logs.openstack.org server and ARA databases located in such
> directories will be loaded dynamically by a WSGI middleware.
> 
> You don't need to do anything to benefit from this change -- it will
> be pushed to all jobs that inherit from the base job by default.
> 
> However, if you happen to be using a "nested" installation of ARA and
> Ansible (i.e, OpenStack-Ansible, Kolla-Ansible, TripleO, etc.), this
> means that you can also leverage this feature.
> In order to do that, you'll want to create an "ara-report" directory
> and copy your ARA database inside before your logs are collected and
> uploaded.
> 
I believe this is an important task we should also push on for the projects you
listed above. The main reason to do this is to simplify job uploads and filesystem
demands (thanks clarkb).

Let's see if we can update these projects in the coming week or two!

Great work.

> To help you visualize:
> /ara-report <-- This is the default Zuul report
> /logs/ara <-- This wouldn't be loaded dynamically
> /logs/ara-report <-- This would be loaded dynamically
> /logs/some/directory/ara-report <-- This would be loaded dynamically
> 
> For more details on this feature of ARA, you can refer to the documentation 
> [1].
> 
> Let me know if you have any questions !
> 
> [1]: https://ara.readthedocs.io/en/latest/advanced.html
> 
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
> 
> dmsimard = [irc, github, twitter]
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports

2018-03-29 Thread David Moreau Simard
Hi,

By default, all jobs currently benefit from the generation of a static
ARA report located in the "ara" directory at the root of the log
directory.
Due to scalability concerns, these reports were only generated when a
job failed and were not available on successful runs.

I'm happy to announce that you can expect ARA reports to be available
for every job from now on -- including the successful ones !

You'll notice a subtle but important change: the report directory will
henceforth be named "ara-report" instead of "ara".

Instead of generating and saving an HTML report, we'll now only save
the ARA database in the "ara-report" directory.
This is a special directory from the perspective of the
logs.openstack.org server and ARA databases located in such
directories will be loaded dynamically by a WSGI middleware.

You don't need to do anything to benefit from this change -- it will
be pushed to all jobs that inherit from the base job by default.

However, if you happen to be using a "nested" installation of ARA and
Ansible (i.e, OpenStack-Ansible, Kolla-Ansible, TripleO, etc.), this
means that you can also leverage this feature.
In order to do that, you'll want to create an "ara-report" directory
and copy your ARA database inside before your logs are collected and
uploaded.

To help you visualize:
/ara-report <-- This is the default Zuul report
/logs/ara <-- This wouldn't be loaded dynamically
/logs/ara-report <-- This would be loaded dynamically
/logs/some/directory/ara-report <-- This would be loaded dynamically
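
For the nested case, the copy step is tiny; a minimal sketch (the database
path is whatever your nested ARA run used, and the paths here are examples
only):

```
import os
import shutil

# Example paths only: the sqlite database from the nested ARA run and the
# job log directory that will be uploaded.
nested_db = os.path.expanduser('~/.ara/ansible.sqlite')
log_dir = '/home/zuul/logs'

report_dir = os.path.join(log_dir, 'ara-report')
os.makedirs(report_dir, exist_ok=True)
shutil.copy(nested_db, os.path.join(report_dir, 'ansible.sqlite'))
```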

For more details on this feature of ARA, you can refer to the documentation [1].

Let me know if you have any questions !

[1]: https://ara.readthedocs.io/en/latest/advanced.html

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] baremetal firmware lifecycle management

2018-03-29 Thread Julia Kreger
One of the topics that came up at during the Ironic sessions at the
Rocky PTG was firmware management.

During this discussion, we quickly reached the consensus that we
lacked the ability to discuss and reach a forward direction without:

* An understanding of capabilities and available vendor mechanisms
that can be used to consistently determine and assert desired firmware
to a baremetal node. Ideally, we could find a commonality of two or
more vendor mechanisms that can be abstracted cleanly into high level
actions, boiling down to something as simple as "list_firmware()" and
"set_firmware()" (see the rough sketch after this list). Additionally there
are surely some caveats we need to understand, such as whether the firmware
update must be done in a particular state, and whether a particular prior
condition or next action is required for the particular update.

* An understanding of several use cases where a deployed node may need
to have specific firmware applied. We are presently aware of two
cases. The first being specific firmware is needed to match an
approved operational profile. The second being a desire to perform
ad-hoc changes or have new versions of firmware asserted while a node
has already been deployed.
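
As a straw man for the vendor abstraction (purely illustrative, not an
existing ironic interface), the high-level shape being discussed could look
something like:

```
import abc


class FirmwareInterface(abc.ABC):
    """Straw-man vendor abstraction; not an existing ironic interface."""

    @abc.abstractmethod
    def list_firmware(self, task):
        # Return the firmware inventory for the node in `task`, e.g.
        # [{'component': 'BMC', 'version': '2.61'}, ...]
        pass

    @abc.abstractmethod
    def set_firmware(self, task, components):
        # Assert the desired firmware on the node, e.g.
        # components=[{'component': 'BMC', 'image': 'http://host/fw.bin'}].
        # Implementations would document the required node state and any
        # reboot / next-action caveats mentioned above.
        pass
```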

Naturally any insight that can be shared will help the community to
best model the interaction so we can determine next steps and
ultimately implementation details.

-Julia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-03-29 Thread Emilien Macchi
Greeting folks,

During the last PTG we spent time discussing some ideas around an
All-In-One installer, using 100% of the TripleO bits to deploy a single
node OpenStack, very similar to what we have today with the containerized
undercloud and what we also have with other tools like Packstack or
Devstack.

https://etherpad.openstack.org/p/tripleo-rocky-all-in-one

One of the problems that we're trying to solve here is giving developers a
simple tool so they can both easily and quickly deploy an OpenStack for their
needs.

"As a developer, I need to deploy OpenStack in a VM on my laptop, quickly
and without complexity, reproducing the exact same tooling that TripleO
is using."
"As a Neutron developer, I need to develop a feature in Neutron and test it
with TripleO in my local env."
"As a TripleO dev, I need to implement a new service and test its
deployment in my local env."
"As a developer, I need to reproduce a bug in TripleO CI that blocks the
production chain, quickly and simply."

There are probably more use cases, but those are the ones that come to mind now.

Dan kicked-off a doc patch a month ago:
https://review.openstack.org/#/c/547038/
And I just went ahead and proposed a blueprint:
https://blueprints.launchpad.net/tripleo/+spec/all-in-one
So hopefully we can start prototyping something during Rocky.

Before talking about the actual implementation, I would like to gather
feedback from people interested in the use-cases. If you recognize yourself
in these use-cases and you're not using TripleO today to test your things
because it's too complex to deploy, we want to hear from you.
I want to see feedback (positive or negative) about this idea. We need to
gather ideas, use cases, needs, before we go design a prototype in Rocky.

Thanks to everyone who'll be involved,
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] All Hail our Newest Release Name - OpenStack Stein

2018-03-29 Thread Paul Belanger
Hi everybody!

As the subject reads, the "S" release of OpenStack is officially "Stein". As
has been the case with previous elections, this wasn't the first choice; that was "Solar".

Solar was judged to have legal risk, so as per our name selection process, we
moved to the next name on the list.

Thanks to everybody who participated, and we look forward to making OpenStack Stein
a great release.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-29 Thread Gema Gomez


On 29/03/18 18:26, Richard Wellum wrote:
> Hi,
> 
> So as a current Kolla-Kubernetes Core - I have a slightly different
> opinion than most, I'll try to verbalize it coherently.
> 
> Let's talk about what Kolla is:
> 
> Kolla is a project that builds OpenStack docker images, stores them on
> dockerhub, and provides tools to build your own images from your own
> source. Both the images and the tools it provides, are widely used, very
> popular and extremely stable; TripleO, openstack-helm and kolla-ansible
> to name a few are all deployment methods that use Kolla.
> 
> Kolla has two sub-projects, that both revolve around deployment methods;
> kolla-ansible and kolla-kubernetes. Kolla-ansible is proven, stable and
> used by many in the industry. Part of Kolla's quality is its rock-solid
> dependability in many scenarios. As Kubernetes took over most of the COE
> world, it's only correct that the Kolla team created this sub-project;
> if swarm became suddenly very popular then we should create a
> kolla-swarm sub-project.
> 
> So if we abandon kolla-kubernetes ('sunset' seems much more romantic
> admittedly) - we are abandoning the core Kolla team's efforts in this
> space. No matter how good openstack-helm is (and I've deployed it, know
> a lot of the cores and it's truly excellent and well driven), what
> happens down the line if openstack-helm decides to move on from Kolla -
> say focussing on Loci images or a new flavor that comes along? Then
> Kolla, the core project, will no longer have any validation of its
> docker images/containers running on Kubernetes. That to me is the big
> risk here.
> 
> The key issue in my opinion is that the core Kolla team has focussed on
> kolla-ansible almost exclusively, and have not migrated to using
> kolla-kubernetes as well. As the code base has stagnated, the gates get
> into trouble, and new features and configurations added to
> kolla-ansible are not translated to kolla-kubernetes.
> 
> So I think the real question is not whether we should 'sunset'
> kolla-kubernetes the sub-project, but should we drop Kolla support on
> Kubernetes? Relying on a different team to do so is probably not the
> answer; although it's the one championed in this thread.
> 
> In my opinion we should set some realistic goals before we sunset:
> 
> 1. Pick a feature set for a Rocky v1.0 release, and commit to trying to
> get there. We have a long list of items; maybe pare this down to
> something reasonable.

Are you volunteering to drive this effort forward? I'd be happy to help
define MVP for Rocky.

> 2. Agreement within Kolla core team to learn kolla-kubernetes and start
> to put a percentage of time into this sub-project.

Whilst this would be ideal, we cannot really force people who have no
interest in this sub-project to contribute.

> 3. Identify the people who are genuinely interested in working with it
> within the Kolla team.

+1 if we find enough contributors to make the reasonable list of items
happen during Rocky.

> Without '2' I think sunsetting is the way forward, but the risks should
> be fully understood and hopefully I've made a case for what those are above.

How many contributors are necessary to make MVP?

Cheers,
Gema

> 
> Thanks,
> 
> ||Rich
> 
> 
> On Wed, Mar 28, 2018 at 1:54 PM Chuck Short  > wrote:
> 
> +1
> 
> Regards
> chuck
> On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang
> > wrote:
> 
> There are two projects that solve the problem of running OpenStack on
> Kubernetes: OpenStack-helm and kolla-kubernetes. They both
> leverage the Helm tool for orchestration. There were some different
> perspectives at the beginning, which meant the two teams could not work
> together.
> 
> But recently the differences have become very small, and there are
> also no active contributors in the kolla-kubernetes project.
> 
> So I propose to retire the kolla-kubernetes project. If you are still
> interested in running OpenStack on kubernetes, please refer to 
> openstack-helm project.
> 
> -- 
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-29 Thread Ed Leafe
On Mar 29, 2018, at 12:57 PM, Eric Fried  wrote:
> 
>> That means that for the (re)-programming scenarios you need to
>> dynamically adjust the inventory of a particular FPGA resource provider.
> 
> Oh, see, this is something I had *thought* was a non-starter. 

I need to work on my communication skills. This is what I’ve been saying all 
along.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-29 Thread Dan Smith
> ==> Fully dynamic: You can program one region with one function, and
> then still program a different region with a different function, etc.

Note that this is also the case if you don't have virtualized multi-slot
devices. Like, if you had one that only has one region. Consuming it
consumes the one and only inventory.

> ==> Single program: Once you program the card with a function, *all* its
> virtual slots are *only* capable of that function until the card is
> reprogrammed.  And while any slot is in use, you can't reprogram.  This
> is Sundar's FPGA use case.  It is also Sylvain's VGPU use case.
>
> The "fully dynamic" case is straightforward (in the sense of being what
> placement was architected to handle).
> * Model the PF/region as a resource provider.
> * The RP has inventory of some generic resource class (e.g. "VGPU",
> "SRIOV_NET_VF", "FPGA_FUNCTION").  Allocations consume that inventory,
> plain and simple.
> * As a region gets programmed dynamically, it's acceptable for the thing
> doing the programming to set a trait indicating that that function is in
> play.  (Sundar, this is the thing I originally said would get
> resistance; but we've agreed it's okay.  No blood was shed :)
> * Requests *may* use preferred traits to help them land on a card that
> already has their function flashed on it. (Prerequisite: preferred
> traits, which can be implemented in placement.  Candidates with the most
> preferred traits get sorted highest.)

Yup.

> The "single program" case needs to be handled more like what Alex
> describes below.  TL;DR: We do *not* support dynamic programming,
> traiting, or inventorying at instance boot time - it all has to be done
> "up front".
> * The PFs can be initially modeled as "empty" resource providers.  Or
> maybe not at all.  Either way, *they can not be deployed* in this state.
> * An operator or admin (via a CLI, config file, agent like blazar or
> cyborg, etc.) preprograms the PF to have the specific desired
> function/configuration.
>   * This may be cyborg/blazar pre-programming devices to maintain an
> available set of each function
>   * This may be in response to a user requesting some function, which
> causes a new image to be laid down on a device so it will be available
> for scheduling
>   * This may be a human doing it at cloud-build time
> * This results in the resource provider being (created and) set up with
> the inventory and traits appropriate to that function.
> * Now deploys can happen, using required traits representing the desired
> function.

...and it could be in response to something noticing that a recent nova
boot failed to find any candidates with a particular function, which
provisions that thing so it can be retried. This is kind of the "spot
instances" approach -- that same workflow would work here as well,
although I expect most people would fit into the above cases.
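
For the "single program" case, once the pre-programmed provider exposes the
function as a trait, a deploy can ask for it with a required trait in the
flavor extra specs. A small sketch (the trait and flavor names are
illustrative; the "trait:<name>=required" extra spec syntax and client calls
are the existing ones):

```
from novaclient import client as nova_client

# sess would be an authenticated keystoneauth1 session
nova = nova_client.Client('2.53', session=sess)

flavor = nova.flavors.find(name='fpga.small')  # example flavor name
flavor.set_keys({'trait:CUSTOM_FPGA_FUNCTION_FOO': 'required'})
```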

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-29 Thread Eric Fried
> That means that for the (re)-programming scenarios you need to
> dynamically adjust the inventory of a particular FPGA resource provider.

Oh, see, this is something I had *thought* was a non-starter.  This
makes the "single program" case way easier to deal with, and allows it
to be handled on the fly:

* Model your region as a provider with separate resource classes for
each function it supports.  The inventory totals for each would be the
total number of virtual slots (or whatever they're called) of that type
that are possible when the device is flashed with that function.
* An allocation is made for one unit of class X.  This percolates down
to cyborg to do the flashing/attaching.  At this time, cyborg *deletes*
the inventories for all the other resource classes.
* In a race with different resource classes, whoever gets to cyborg
first, wins.  The second one will see that the device is already flashed
with X, and fail.  The failure will bubble up, causing the allocation to
be released.
* Requests for multiple different resource classes at once will have to
filter out allocation candidates that put both on the same device.  Not
completely sure how this happens.  Otherwise they would have to fail at
cyborg, resulting in the same bubble/deallocate as above.
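
A rough sketch of the kind of inventory replacement that implies against the
placement API (the resource class and variable names are illustrative; the
endpoint and payload shape follow the existing placement API):

```
# After flashing function X, replace the provider's inventories so that only
# the flashed function's resource class remains; omitting the other
# per-function classes from the payload deletes them.
provider_uuid = 'fpga-rp-uuid'      # example value
provider_generation = 3             # example value

payload = {
    'resource_provider_generation': provider_generation,
    'inventories': {
        'CUSTOM_FPGA_FUNCTION_X': {
            'total': 4,     # virtual slots available with function X flashed
            'max_unit': 1,
        },
    },
}

# placement_session: a keystoneauth1 Adapter pointed at the placement service
resp = placement_session.put(
    '/resource_providers/%s/inventories' % provider_uuid,
    json=payload,
    headers={'OpenStack-API-Version': 'placement 1.17'})
```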

-efried

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2018-03-29 Thread Chris Dent


Greetings OpenStack community,

Chaotic but fun API-SIG meeting today. elmiko has done some review of the 
long-in-progress microversion history doc [7] and reports that it is worth 
finishing and publishing as a historical document explaining why microversions 
exist. Having greater context on the why of microversions should help them feel 
like less of an imposition.

We then discussed what sort of things, if any, the API-SIG should try to talk 
about during the Forum in Vancouver. edleafe will seek out some SDK-related 
people to see if we can share some time together. If you're reading this and 
you have some ideas, please respond and tell us.

Then the long lost mordred returned from the land of zuul to discuss some 
ambiguities in how services, code and configuration use the concept of version. 
I have to admit that my eyes glazed over (with tears) for a moment through 
here, but the outcome was that the best thing to do is not surprise the users 
or make them do extra work. This will be encoded in a followup to the merging 
guidance on SDKs.

elmiko had another report on research he was doing: He thinks he may have 
figured out a way to deal with microversions in OpenAPI documents. I don't want 
to oversell this yet, but if it works this could be combined with the pending 
experiments to make structured data out of existing api-ref documents [8] to 
auto-generate OpenAPI schema from documentation.

Then we looked at some pending guidelines (see below). One that stood out as 
potentially controversial is the use of service name or service type in the errors 
document [9].

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

* Add guideline on exposing microversions in SDKs
  https://review.openstack.org/#/c/532814

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week.

# Guidelines Currently Under Review [3]

* Break up the HTTP guideline into smaller documents
  https://review.openstack.org/#/c/554234/

* Add guidance on needing cache-control headers
  https://review.openstack.org/550468

* Update the errors guidance to use service-type for code
  https://review.openstack.org/#/c/554921/

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the OpenStack 
developer mailing list[1] with the tag "[api]" in the subject. In your email, 
you should include any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://review.openstack.org/444892
[8] https://review.openstack.org/#/c/528801/
[9] https://review.openstack.org/#/c/554921/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Prototyping dedicated roles with unique repositories for Ansible tasks in TripleO

2018-03-29 Thread David Moreau Simard
Nice!

I don't have a strong opinion about this, but what I might recommend would be
to chat with the openshift-ansible [1] and the kolla-ansible [2] folks.

I'm happy to do the introductions if necessary!

Their models, requirements or context might be different than ours but at
the end of the day, it's a set of Ansible roles and playbooks to install
something.
It would be a good idea just to informally chat about the reasons why their
things are set up the way they are, what are the pros, cons.. or their
challenges.

I'm not saying we should structure our things like theirs.
What I'm trying to say is that they've surely learned a lot over the years
these projects have existed and it's surely worthwhile to chat with them so
we don't repeat some of the same mistakes.

Generally just draw from their experience, learn from their conclusions and
take that into account before committing to any particular model we'd like
to have in TripleO ?

[1]: https://github.com/openshift/openshift-ansible
[2]: https://github.com/openstack/kolla-ansible

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Thu, Mar 29, 2018, 12:34 PM David Peacock,  wrote:

> Hi everyone,
>
> During the recent PTG in Dublin, it was decided that we'd prototype a way
> forward with Ansible tasks in TripleO that adhere to Ansible best
> practises, creating dedicated roles with unique git repositories and RPM
> packaging per role.
>
> With a view to moving in this direction, a couple of us on the TripleO
> team have begun developing tooling to facilitate this.  Initially we've
> worked on a tool [0] to extract Ansible tasks lists from
> tripleo-heat-templates and move them into new formally structured Ansible
> roles.
>
> An example with the existing keystone docker service [1]:
>
> The upgrade_tasks block will become:
>
> ```
> upgrade_tasks:
>   - import_role:
>   name: tripleo-role-keystone
>   tasks_from: upgrade.yaml
> ```
>
> The fast_forward_upgrade_tasks block will become:
>
> ```
> fast_forward_upgrade_tasks:
>   - import_role:
>   name: tripleo-role-keystone
>   tasks_from: fast_forward_upgrade.yaml
> ```
>
> And this role [2] will be structured:
>
> ```
> tripleo-role-keystone/
> └── tasks
> ├── fast_forward_upgrade.yaml
> ├── main.yaml
> └── upgrade.yaml
> ```
>
> We'd love to hear any feedback from the community as we move towards this.
>
> Thank you,
> David Peacock
>
> [0] https://github.com/davidjpeacock/openstack-role-
> extract/blob/master/role-extractor-creator.py
> [1] https://github.com/openstack/tripleo-heat-templates/blob/
> master/docker/services/keystone.yaml
> [2] https://github.com/davidjpeacock/tripleo-role-keystone
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project

2018-03-29 Thread Richard Wellum
Hi,

So, as a current Kolla-Kubernetes core, I have a slightly different opinion
than most; I'll try to verbalize it coherently.

Let's talk about what Kolla is:

Kolla is a project that builds OpenStack docker images, stores them on
dockerhub, and provides tools to build your own images from your own
source. Both the images and the tools it provides are widely used, very
popular and extremely stable; TripleO, openstack-helm and kolla-ansible, to
name a few, are all deployment methods that use Kolla.

Kolla has two sub-projects that both revolve around deployment methods:
kolla-ansible and kolla-kubernetes. Kolla-ansible is proven, stable and
used by many in the industry. Part of Kolla's quality is its rock-solid
dependability in many scenarios. As Kubernetes took over most of the COE
world, it's only correct that the Kolla team created this sub-project; if
Swarm suddenly became very popular then we should create a kolla-swarm
sub-project.

So if we abandon kolla-kubernetes ('sunset' seems much more romantic
admittedly) - we are abandoning the core Kolla team's efforts in this
space. No matter how good openstack-helm is (and I've deployed it, know a
lot of the cores and it's truly excellent and well driven), what happens
down the line if openstack-helm decides to move on from Kolla - say
focussing on Loci images or a new flavor that comes along? Then Kolla, the
core project, will no longer have any validation of its docker
images/containers running on Kubernetes. That to me is the big risk here.

The key issue in my opinion is that the core Kolla team has focussed on
kolla-ansible almost exclusively, and has not migrated to using
kolla-kubernetes as well. As the code base has stagnated, the gates get
into trouble, and new features and configurations added to kolla-ansible
are not translated to kolla-kubernetes.

So I think the real question is not whether we should 'sunset'
kolla-kubernetes the sub-project, but should we drop Kolla support on
Kubernetes? Relying on a different team to do so is probably not the
answer; although it's the one championed in this thread.

In my opinion we should set some realistic goals before we sunset:

1. Pick a feature set for a Rocky v1.0 release, and commit to trying to get
there. We have a long list of items; maybe pare this down to something
reasonable.
2. Agreement within Kolla core team to learn kolla-kubernetes and start to
put a percentage of time into this sub-project.
3. Identify the people who are genuinely interested in working with it
within the Kolla team.

Without '2' I think sunsetting is the way forward, but the risks should be
fully understood and hopefully I've made a case for what those are above.

Thanks,

||Rich


On Wed, Mar 28, 2018 at 1:54 PM Chuck Short  wrote:

> +1
>
> Regards
> chuck
> On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang 
> wrote:
>
>> There are two projects that solve the issue of running OpenStack on
>> Kubernetes: OpenStack-helm and kolla-kubernetes. They both
>> leverage the helm tool for orchestration. There were some differences in
>> perspective at the beginning, which meant the two teams could not work together.
>>
>> But recently, the differences have become very small, and there is also no
>> active
>> contributor in the kolla-kubernetes project.
>>
>> So I propose to retire kolla-kubernetes project. If you are still
>> interested in running OpenStack on kubernetes, please refer to
>> openstack-helm project.
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable] No more stable Phases welcome Extended Maintenance

2018-03-29 Thread Matt Riedemann

On 3/29/2018 3:36 AM, Tony Breeds wrote:

Hi all,
 At Sydney we started the process of change on the stable branches.
Recently we merged a TC resolution[1] to alter the EOL process.  The
next step is refining the stable policy itself.

I've created a review to do that.  I think it covers most of the points
from Sydney and Dublin.

Please check it out:
   https://review.openstack.org/#/c/552733/

Yours Tony.

[1]https://review.openstack.org/548916


+ops

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-03-29 Thread Matt Riedemann

On 3/29/2018 9:28 AM, Sean McGinnis wrote:

I do not think changing the revert to snapshot implementation is appropriate
here. There may be some cases where this can get the desired result, but there
is no guarantee that there is a snapshot on the volume's base image state to
revert to. It also would not make sense to overload this functionality to
"revert to snapshot if you can, otherwise do all this other stuff instead."



Agree.


This would need to be a new API (microversioned) to add a reimage call. I
wouldn't expect implementation to be too difficult as we already have that
functionality for new volumes. We would just need to figure out the most
appropriate way to take an already in-use volume, detach it, rewrite the image,
then reattach it.


Agree.



Ideally, from my perspective, Nova would take care of the detach/attach portion
and Cinder would only need to take care of imaging the volume.


Agree. :) And yeah, I pointed this out in the nova spec for 
volume-backed rebuild also. I think nova can basically handle this like 
it does for shelve today, and we'd do something like this:


1. disconnect the volume from the host
2. create a new empty volume attachment for the volume and instance - 
this is needed so the volume stays 'reserved' while we re-image it

3. delete the old volume attachment
4. call the new cinder re-image API
5. once the volume is available (TODO: how would we know?)
6. re-attach the volume by updating the attachment with the host 
connector, connect on the host, and complete the attachment (marks the 
volume as in-use again)
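
To make that flow a bit more concrete, here's a rough sketch of what the
nova-side logic could look like. This is illustrative only: the cinder
re-image call doesn't exist yet, the helpers marked "hypothetical" aren't
real nova code, and the attachment calls only approximate the
python-cinderclient attachments API.

```python
# Rough sketch only -- not real nova code.

def rebuild_volume_backed_root(cinder, instance, root_bdm, new_image_id, connector):
    # 1. disconnect the volume from the compute host
    disconnect_volume_from_host(root_bdm)                      # hypothetical helper

    # 2. create a new empty attachment so the volume stays 'reserved'
    new_attachment = cinder.attachments.create(
        root_bdm.volume_id, None, instance.uuid)

    # 3. delete the old attachment
    cinder.attachments.delete(root_bdm.attachment_id)

    # 4. call the proposed re-image action (name is a guess)
    cinder.volumes.reimage(root_bdm.volume_id, new_image_id)   # hypothetical API

    # 5. wait for cinder to signal the re-image is done (TBD how, as noted above)
    wait_for_volume_reimaged(cinder, root_bdm.volume_id)       # hypothetical helper

    # 6. re-attach: update the attachment with the host connector and complete it
    cinder.attachments.update(new_attachment.id, connector)
    connect_volume_on_host(root_bdm, connector)                # hypothetical helper
    cinder.attachments.complete(new_attachment.id)
```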


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-29 Thread Jay Pipes

On 03/28/2018 07:03 PM, Nadathur, Sundar wrote:
Thanks, Eric. Looks like there are no good solutions even as candidates, 
but only options with varying levels of unacceptability. It is funny 
that the option that is considered the least unacceptable is to let 
the problem happen and then fail the request (last one in your list).


Could I ask what is the objection to the scheme that applies multiple 
traits and removes one as needed, apart from the fact that it has races?


The fundamental objection that I've had to various discussions that 
involve abusing traits in this fashion is that you are essentially 
trying to "consume" traits. But traits are *not consumable things*. Only 
resource classes are consumable things.


If you want to track the inventory of a certain thing -- and consume 
those things during scheduling -- then you need to use resource classes 
for that thing. The inventory management system in placement already has 
race protections in it. This means that you won't be able to 
over-allocate a particular consumable accelerated function if there 
isn't inventory capacity for that particular function on an FPGA. 
Likewise, you would not be able to *remove* inventory for a particular 
function on an FPGA if some instance is consuming that particular 
function. This protection does *not* exist if you are tracking 
particular functions with traits; the reason is because an instance 
doesn't *consume* a trait. There's no such thing as "I started an 
instance with accelerated function X and therefore I am consuming trait 
Y on this FPGA."


So, bottom line for me is make sure we're using resource classes for 
consumable items and traits for representing non-consumable capabilities 
**of the resource provider**.


That means that for the (re)-programming scenarios you need to 
dynamically adjust the inventory of a particular FPGA resource provider.


You will need to *add* an inventory item of a custom resource class 
representing the specific function you are flashing *to an empty region*.


You *may* want to *delete* an inventory item of a custom resource class 
representing the specific function *when an instance that was using that 
specific function is terminated*. When the instance is terminated, Nova 
will *automatically* delete allocations of that custom resource class 
associated with the instance if you use a custom resource class to 
represent the particular accelerated function. No such automatic removal 
of allocations is done if you use traits to represent particular 
accelerated functions (again, because traits aren't consumable things).
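
To illustrate the distinction with a concrete (made-up) example, an FPGA region
provider might look like:

FPGA_REGION_RP: {
  inventory: {
      CUSTOM_FPGA_FUNCTION_X: 4,        # consumable: each instance allocates 1
  }
  traits: [
      CUSTOM_FPGA_FUNCTION_X_CAPABLE,   # descriptive only: never consumed
  ]
}

An instance using the function holds an allocation of CUSTOM_FPGA_FUNCTION_X:1,
which placement releases automatically when the instance is deleted; the trait
just describes what the region can do and is never "used up".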


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-29 Thread Eric Fried
We discussed this on IRC [1], hangout, and etherpad [2].  Here is the
summary, which we mostly seem to agree on:

There are two different classes of device we're talking about
modeling/managing.  (We don't know the real nomenclature, so forgive
errors in that regard.)

==> Fully dynamic: You can program one region with one function, and
then still program a different region with a different function, etc.

==> Single program: Once you program the card with a function, *all* its
virtual slots are *only* capable of that function until the card is
reprogrammed.  And while any slot is in use, you can't reprogram.  This
is Sundar's FPGA use case.  It is also Sylvain's VGPU use case.

The "fully dynamic" case is straightforward (in the sense of being what
placement was architected to handle).
* Model the PF/region as a resource provider.
* The RP has inventory of some generic resource class (e.g. "VGPU",
"SRIOV_NET_VF", "FPGA_FUNCTION").  Allocations consume that inventory,
plain and simple.
* As a region gets programmed dynamically, it's acceptable for the thing
doing the programming to set a trait indicating that that function is in
play.  (Sundar, this is the thing I originally said would get
resistance; but we've agreed it's okay.  No blood was shed :)
* Requests *may* use preferred traits to help them land on a card that
already has their function flashed on it. (Prerequisite: preferred
traits, which can be implemented in placement.  Candidates with the most
preferred traits get sorted highest.)

The "single program" case needs to be handled more like what Alex
describes below.  TL;DR: We do *not* support dynamic programming,
traiting, or inventorying at instance boot time - it all has to be done
"up front".
* The PFs can be initially modeled as "empty" resource providers.  Or
maybe not at all.  Either way, *they can not be deployed* in this state.
* An operator or admin (via a CLI, config file, agent like blazar or
cyborg, etc.) preprograms the PF to have the specific desired
function/configuration.
  * This may be cyborg/blazar pre-programming devices to maintain an
available set of each function
  * This may be in response to a user requesting some function, which
causes a new image to be laid down on a device so it will be available
for scheduling
  * This may be a human doing it at cloud-build time
* This results in the resource provider being (created and) set up with
the inventory and traits appropriate to that function.
* Now deploys can happen, using required traits representing the desired
function.
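
As a concrete (again, made-up) illustration, a deploy against a pre-programmed
card then boils down to an allocation candidates query such as:

  GET /allocation_candidates?resources=FPGA_FUNCTION:1&required=CUSTOM_FUNCTION_X

The `required` query parameter already exists in recent placement microversions;
preferred traits (useful for the fully dynamic case above) would still need to
be added.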

-efried

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-29.log.html#t2018-03-29T12:52:56
[2] https://etherpad.openstack.org/p/placement-dynamic-traiting

On 03/29/2018 07:38 AM, Alex Xu wrote:
> Agree with that, whatever the tweak inventory or traits, none of them works.
> 
> Same as VGPU, we can support pre-programmed mode for multiple-functions
> region, and each region only can support one type function.
> 
> There are two reasons why Cyborg has a filter:
> * records the usage of functions in a region
> * records which function is programmed.
> 
> For #1, each region provider multiple functions. Each function can be
> assigned to a VM. So we should create ResourceProvider for the region. And
> the resource class is function. That is similar to the SR-IOV device.
> The region(The PF)
> provides functions (VFs).
> 
> For #2, We should use trait to distinguish the function type.
> 
> Then we didn't keep any inventory info in the cyborg again, and we
> needn't any filter in cyborg also,
> and there is no race condition anymore.
> 
> 2018-03-29 2:48 GMT+08:00 Eric Fried  >:
> 
> Sundar-
> 
>         We're running across this issue in several places right
> now.   One
> thing that's definitely not going to get traction is
> automatically/implicitly tweaking inventory in one resource class when
> an allocation is made on a different resource class (whether in the same
> or different RPs).
> 
>         Slightly less of a nonstarter, but still likely to get
> significant
> push-back, is the idea of tweaking traits on the fly.  For example, your
> vGPU case might be modeled as:
> 
> PGPU_RP: {
>   inventory: {
>       CUSTOM_VGPU_TYPE_A: 2,
>       CUSTOM_VGPU_TYPE_B: 4,
>   }
>   traits: [
>       CUSTOM_VGPU_TYPE_A_CAPABLE,
>       CUSTOM_VGPU_TYPE_B_CAPABLE,
>   ]
> }
> 
>         The request would come in for
> resources=CUSTOM_VGPU_TYPE_A:1&required=VGPU_TYPE_A_CAPABLE, resulting
> in an allocation of CUSTOM_VGPU_TYPE_A:1.  Now while you're processing
> that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the PGPU_RP.
> So it doesn't matter that there's still inventory of
> CUSTOM_VGPU_TYPE_B:4, because a request including
> required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP.
> There's of 

[openstack-dev] [TripleO] Prototyping dedicated roles with unique repositories for Ansible tasks in TripleO

2018-03-29 Thread David Peacock
Hi everyone,

During the recent PTG in Dublin, it was decided that we'd prototype a way
forward with Ansible tasks in TripleO that adhere to Ansible best
practises, creating dedicated roles with unique git repositories and RPM
packaging per role.

With a view to moving in this direction, a couple of us on the TripleO team
have begun developing tooling to facilitate this.  Initially we've worked
on a tool [0] to extract Ansible tasks lists from tripleo-heat-templates
and move them into new formally structured Ansible roles.

An example with the existing keystone docker service [1]:

The upgrade_tasks block will become:

```
upgrade_tasks:
  - import_role:
  name: tripleo-role-keystone
  tasks_from: upgrade.yaml
```

The fast_forward_upgrade_tasks block will become:

```
fast_forward_upgrade_tasks:
  - import_role:
  name: tripleo-role-keystone
  tasks_from: fast_forward_upgrade.yaml
```

And this role [2] will be structured:

```
tripleo-role-keystone/
└── tasks
├── fast_forward_upgrade.yaml
├── main.yaml
└── upgrade.yaml
```

We'd love to hear any feedback from the community as we move towards this.

Thank you,
David Peacock

[0]
https://github.com/davidjpeacock/openstack-role-extract/blob/master/role-extractor-creator.py
[1]
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/keystone.yaml
[2] https://github.com/davidjpeacock/tripleo-role-keystone
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-21, April 2-6

2018-03-29 Thread Sean McGinnis
Welcome to our regular release countdown email.

Development Focus
-

Team focus should be on spec approval and implementation for priority features.

General Information
---

We would love to have all the liaisons attend the release team meeting every
Friday[1].

[1] http://eavesdrop.openstack.org/#Release_Team_Meeting

That said, we are skipping this week's meeting due to availability with the
Easter weekend. :)

But we would also like to have PTLs and/or release liaisons in the
#openstack-release channel on milestone and release weeks. Please keep that in
mind and try to linger there during these times if possible. From time to time
it is necessary to track someone down to answer questions or resolve issues. It
would be great to have someone easily pingable.

I also want to make sure everyone is aware of the proposed changes to the
stable policy in support of the "Extended Maintenance" changes [2]. Please take
a look so you are aware of the planned changes, and please chime in if you have
any issues, questions, or concerns about the proposal.

[2] https://review.openstack.org/#/c/552733/

Upcoming Deadlines & Dates
--

Rocky-1 milestone: April 19 (R-19 week)
Forum at OpenStack Summit in Vancouver: May 21-24

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-29 Thread Sean McGinnis
> 
> Neither local nor third-party CI use should be affected.  There's no
> change in behavior based on current usage patterns.  Only the caveat
> that if you introduce an error into LIBS_FROM_GIT (e.g., a misspelled or
> non-existent package name), it will not automatically be caught.
> 
> -Jim

Perfect, thanks Jim.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-29 Thread Sean McGinnis
> 
> tl;dr: You don't _have_ to automate this stuff, but it helps.
> 
> sphinx-apidoc generates stub files containing a whole load of autodoc
> directives. As noted above, you can check the output of a sphinx-apidoc 
> run and you'll see just this. If I were to guess, Cinder simply checked
> in the output of such a run [*] into Git, meaning they don't need to
> run it each time. This works but it comes with the downside that your
> docs can and will get out of sync with the actual code when, for
> example, you either add or remove some modules or functions. Running
> sphinx-apidoc on each build, as we've been doing with pbr's autodoc
> feature, ensures this out-of-sync issue doesn't happen, at the expense
> of increased doc build times.
> 
> Stephen
> 
> [*] They might also have handwritten this stuff, but I highly doubt
> that (it's rather tedious to write).
> 

Ah, perfect. This was the motivation I was looking for. I don't think anyone on
the team is aware of this. In this case, switching over to using the new,
dynamically generated way has a lot of benefit.

Looking deeper, there are some custom things being done to generate this
output. So rather than maintaining (or really not maintaining as is the reality
here) this custom code, we should streamline this and be consistent by
following the new approach.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Breathing new life into external Sahara CI

2018-03-29 Thread Evgeny Sikachov
Cool! This is really good news! I am ready to help

On Mar 29, 2018, 7:16 PM +0400, Jeremy Freudberg , 
wrote:
> I am happy to announce that I have finally acquired two machines to be
> used towards our external CI infrastructure. (Thanks very much to
> Cisco for their generosity!)
>
> Now that we have accomplished the hard part, getting a hardware
> donation, we can finally move on to the next step of actually
> deploying the CI services. I call upon the Sahara community as a whole
> to assist me in this endeavor. We can use the sahara-ci-config repo as
> a starting point, but there are some tweaks to discuss.
>
> Best,
> Jeremy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Breathing new life into external Sahara CI

2018-03-29 Thread Jeremy Freudberg
I am happy to announce that I have finally acquired two machines to be
used towards our external CI infrastructure. (Thanks very much to
Cisco for their generosity!)

Now that we have accomplished the hard part, getting a hardware
donation, we can finally move on to the next step of actually
deploying the CI services. I call upon the Sahara community as a whole
to assist me in this endeavor. We can use the sahara-ci-config repo as
a starting point, but there are some tweaks to discuss.

Best,
Jeremy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-29 Thread James E. Blair
Sean McGinnis  writes:

> On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote:
>> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
>> > Hi,
>> > 
>> > I've proposed a change to devstack which slightly alters the
>> > LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
>> > those using legacy devstack jobs (but you may want to be aware of it).
>> > It is more significant for new-style devstack jobs.
>> > 
>> > -snip-
>> > 
>> 
>> How does this apply to uses of devstack outside of zuul, such as in a
>> local development environment?
>> 
>> Doug
>> 
>
> This is my question too. I know in Cinder there are a lot of third party CI
> systems that do not use zuul. If they are impacted in any way by changes to
> devstack, we will need to make sure they are all aware of those changes (and
> have an alternative method for them to get the same functionality).

Neither local nor third-party CI use should be affected.  There's no
change in behavior based on current usage patterns.  Only the caveat
that if you introduce an error into LIBS_FROM_GIT (e.g., a misspelled or
non-existent package name), it will not automatically be caught.
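
For reference, nothing changes about how the variable is set locally; a
local.conf would still contain something like:

  [[local|localrc]]
  LIBS_FROM_GIT=oslo.messaging,python-novaclient

The only difference is that a misspelled project name in that list will no
longer be flagged automatically.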

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VMware NSX CI - no longer running?

2018-03-29 Thread Matt Riedemann

On 3/29/2018 2:44 AM, Radoslav Gerganov wrote:
While running the VMware CI continues to be a challenge, I must say this 
patch fixes a regression introduced by Matt Riedemann's patch:


https://review.openstack.org/#/c/549411/

for which the VMware CI clearly indicated there was a problem and 
nevertheless the core team submitted it.
Before blaming the CI for not voting enough, the core team should start 
taking into account existing CI votes.
It'd be nice also to include VMware driver maintainers as reviewers when 
making changes to the VMware driver.


Yup, clearly my fault on that one, and I deserve the karmic hit.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-03-29 Thread Sean McGinnis
>   This is the spec [0] about rebuilding the volume-backed server.
> The question raised in the spec is about how to handle the root volume.
> Finally,in Nova team,we think that the cleanest / best solution to this is to
> add a volume action API to cinder for re-imaging the volume.Once that is
> available in a new cinder v3 microversion, nova can use it. The reason I
> ...
>   So Nova team want Cinder to achieve the re-image api.But, I see a spec
> about volume revert by snapshot[1].It is so good for rebuild operation.In
> short,I have two ideas,one is change the volume revert by snapshot spec to
> re-image spec,not only it can let the volume revert by snapshot,but also can
> re-image the volume which the image's size is greater than 0;another idea is
> add a only re-image spec,it only can re-image the volume which the image's
> size is greater than 0.
> 

I do not think changing the revert to snapshot implementation is appropriate
here. There may be some cases where this can get the desired result, but there
is no guarantee that there is a snapshot on the volume's base image state to
revert to. It also would not make sense to overload this functionality to
"revert to snapshot if you can, otherwise do all this other stuff instead."

This would need to be a new API (microversioned) to add a reimage call. I
wouldn't expect implementation to be too difficult as we already have that
functionality for new volumes. We would just need to figure out the most
appropriate way to take an already in-use volume, detach it, rewrite the image,
then reattach it.

Ideally, from my perspective, Nova would take care of the detach/attach portion
and Cinder would only need to take care of imaging the volume.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-29 Thread Stephen Finucane
On Thu, 2018-03-29 at 07:47 -0500, Sean McGinnis wrote:
> > > 
> > > It's not mentioned here, but I discovered today that Cinder is using the
> > > sphinx.ext.autodoc module. Is there any issue with using this?
> > > 
> > 
> > Nope - sphinx-apidoc and the likes use autodoc under the hood. You can
> > see this by checking the output in 'contributor/api' or the likes.
> > 
> > Stephen
> > 
> 
> I'm wondering if there is a problem with using this vs. the way being 
> proposed.
> 
> In other words, do we need to switch over to this new sphinxcontrib module, or
> staying with autodoc should be OK. And if so, why not switch current users of
> the pbr method over to use sphinx.ext.autodoc rather than introducing something
> new?

tl;dr: You don't _have_ to automate this stuff, but it helps.

sphinx-apidoc generates stub files containing a whole load of autodoc
directives. As noted above, you can check the output of a sphinx-apidoc 
run and you'll see just this. If I were to guess, Cinder simply checked
in the output of such a run [*] into Git, meaning they don't need to
run it each time. This works but it comes with the downside that your
docs can and will get out of sync with the actual code when, for
example, you either add or remove some modules or functions. Running
sphinx-apidoc on each build, as we've been doing with pbr's autodoc
feature, ensures this out-of-sync issue doesn't happen, at the expense
of increased doc build times.
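
For anyone who hasn't used it, a typical invocation and the kind of stub it
generates look roughly like this (the paths and module names are just an
example):

  $ sphinx-apidoc -o doc/source/contributor/api cinder

which writes .rst stub files consisting mostly of directives such as:

  .. automodule:: cinder.volume.api
     :members:
     :undoc-members:
     :show-inheritance: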

Stephen

[*] They might also have handwritten this stuff, but I highly doubt
that (it's rather tedious to write).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VMware NSX CI - no longer running?

2018-03-29 Thread Matt Riedemann

On 3/29/2018 7:53 AM, William M Edmonds wrote:
running only on virt/vmwareapi changes would not catch problems caused 
by changes elsewhere, such as compute/manager.py or virt/driver.py


Right, I think virt driver 3rd party CI should run on at least some 
select sub-trees, the major ones that come to mind are:


nova/compute/manager.py
nova/virt/
nova/virt/block_device.py

There are likely others.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VMware NSX CI - no longer running?

2018-03-29 Thread Matt Riedemann

On 3/29/2018 5:19 AM, melanie witt wrote:
Thanks. Just curious, how is the CI passing if the driver is currently 
broken for detach_volume? I had thought maybe particular tests were 
skipped in response to my original email that linked the bug fix patch, 
but it looks like that run was from before I sent the original email.


I had the same question, and it looks like the tests that failed in [1] 
aren't being run in [2].


[1] https://review.openstack.org/#/c/549411/
[2] https://review.openstack.org/#/c/557256/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-29 Thread Eric Fried
Sundar-

To be clear, *all* of the solutions will have race conditions.  There's
no getting around the fact that we need to account for situations where
an allocation is made, but then can't be satisfied by cyborg (or
neutron, or nova, or cinder, or whoever).  That failure has to bubble up
and cause retry or failure of the overarching flow.

The objection to "dynamic trait setting" is that traits are intended to
indicate characteristics, not states.

https://www.google.com/search?q=estar+vs+ser

I'll have to let Jay or Dan explain it further.  Because TBH, I don't
see the harm in mucking with traits/inventories dynamically.

The solutions I discussed here are if it's critical that everything be
dynamic and ultimately flexible.  Alex brings up a different option in
another subthread which is more likely how we're going to handle this
for our Nova scenarios in Rocky.  I'll comment further in that subthread.

-efried

On 03/28/2018 06:03 PM, Nadathur, Sundar wrote:
> Thanks, Eric. Looks like there are no good solutions even as candidates,
> but only options with varying levels of unacceptability. It is funny
> that the option that is considered the least unacceptable is to let
> the problem happen and then fail the request (last one in your list).
> 
> Could I ask what is the objection to the scheme that applies multiple
> traits and removes one as needed, apart from the fact that it has races?
> 
> Regards,
> Sundar
> 
> On 3/28/2018 11:48 AM, Eric Fried wrote:
>> Sundar-
>>
>> We're running across this issue in several places right now.   One
>> thing that's definitely not going to get traction is
>> automatically/implicitly tweaking inventory in one resource class when
>> an allocation is made on a different resource class (whether in the same
>> or different RPs).
>>
>> Slightly less of a nonstarter, but still likely to get significant
>> push-back, is the idea of tweaking traits on the fly.  For example, your
>> vGPU case might be modeled as:
>>
>> PGPU_RP: {
>>    inventory: {
>>    CUSTOM_VGPU_TYPE_A: 2,
>>    CUSTOM_VGPU_TYPE_B: 4,
>>    }
>>    traits: [
>>    CUSTOM_VGPU_TYPE_A_CAPABLE,
>>    CUSTOM_VGPU_TYPE_B_CAPABLE,
>>    ]
>> }
>>
>> The request would come in for
>> resources=CUSTOM_VGPU_TYPE_A:1&required=VGPU_TYPE_A_CAPABLE, resulting
>> in an allocation of CUSTOM_VGPU_TYPE_A:1.  Now while you're processing
>> that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the PGPU_RP.
>> So it doesn't matter that there's still inventory of
>> CUSTOM_VGPU_TYPE_B:4, because a request including
>> required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP.
>> There's of course a window between when the initial allocation is made
>> and when you tweak the trait list.  In that case you'll just have to
>> fail the loser.  This would be like any other failure in e.g. the spawn
>> process; it would bubble up, the allocation would be removed; retries
>> might happen or whatever.
>>
>> Like I said, you're likely to get a lot of resistance to this idea as
>> well.  (Though TBH, I'm not sure how we can stop you beyond -1'ing your
>> patches; there's nothing about placement that disallows it.)
>>
>> The simple-but-inefficient solution is simply that we'd still be able
>> to make allocations for vGPU type B, but you would have to fail right
>> away when it came down to cyborg to attach the resource.  Which is code
>> you pretty much have to write anyway.  It's an improvement if cyborg
>> gets to be involved in the post-get-allocation-candidates
>> weighing/filtering step, because you can do that check at that point to
>> help filter out the candidates that would fail.  Of course there's still
>> a race condition there, but it's no different than for any other
>> resource.
>>
>> efried
>>
>> On 03/28/2018 12:27 PM, Nadathur, Sundar wrote:
>>> Hi Eric and all,
>>>  I should have clarified that this race condition happens only for
>>> the case of devices with multiple functions. There is a prior thread
>>> 
>>>
>>> about it. I was trying to get a solution within Cyborg, but that faces
>>> this race condition as well.
>>>
>>> IIUC, this situation is somewhat similar to the issue with vGPU types
>>> 
>>>
>>> (thanks to Alex Xu for pointing this out). In the latter case, we could
>>> start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4).  But, after
>>> consuming a unit of  vGPU-type-a, ideally the inventory should change
>>> to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators,
>>> we start with an RP inventory of (region-type-A: 1, function-X: 4). But,
>>> after consuming a unit of that function, ideally the inventory should
>>> change to: (region-type-A: 0, function-X: 3).
>>>
>>> I understand that this approach is controversial 

Re: [openstack-dev] [requirements] Adding objgraph to global requirements

2018-03-29 Thread Doug Hellmann
Excerpts from Renat Akhmerov's message of 2018-03-29 15:33:58 +0700:
> After some discussion in IRC on this topic there was an idea just to write 
> and push upstream needed tools using objgraph w/o having it in the 
> requirements.txt at all. We just need to make sure that those tools are never 
> used during production runs and unit tests (CI will help to verify that). If 
> needed, objgraph can be manually installed and used when we need to investigate 
> something.
> 
> If such practice is considered OK and doesn’t violate any OpenStack 
> guidelines then I think this would work, at least in my case.

I don't see any problem with that but I'm also not necessarily opposed
to adding it to the global requirements list so we can use it like other
dependencies.

What sorts of tools are you talking about building?

> 
> Thanks
> 
> Renat Akhmerov
> @Nokia
> 
> On 29 Mar 2018, 15:00 +0700, Renat Akhmerov , wrote:
> > Hi,
> >
> > Can we consider to add objgraph [1] to OpenStack global requirements? I 
> > found this library extremely useful for investigating memory leaks in 
> > Python programs but unfortunately I can’t push upstream any code using it. 
> > It seems to be pretty mature and supports all needed Python versions.
> >
> > Or maybe there’s some alternative already available in the OpenStack 
> > requirements?
> >
> > [1] https://pypi.python.org/pypi/objgraph/3.4.0
> >
> >
> > Thanks
> >
> > Renat Akhmerov
> > @Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-29 Thread Sean McGinnis
On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote:
> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
> > Hi,
> > 
> > I've proposed a change to devstack which slightly alters the
> > LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
> > those using legacy devstack jobs (but you may want to be aware of it).
> > It is more significant for new-style devstack jobs.
> > 
> > -snip-
> > 
> 
> How does this apply to uses of devstack outside of zuul, such as in a
> local development environment?
> 
> Doug
> 

This is my question too. I know in Cinder there are a lot of third party CI
systems that do not use zuul. If they are impacted in any way by changes to
devstack, we will need to make sure they are all aware of those changes (and
have an alternative method for them to get the same functionality).

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VMware NSX CI - no longer running?

2018-03-29 Thread William M Edmonds

melanie witt  wrote on 03/29/2018 06:03:26 AM:

> I would like to see the VMware CI running again and it need only run on
> changes under the nova/virt/vmwareapi/ tree, to save on your resources.
> And on our side, I'd like us to add VMware subteam members to VMware
> driver patch reviews (I believe most of the active team members are
> listed on the priorities etherpad [0]) and to be sure we consult VMware
> CI votes when we review.

running only on virt/vmwareapi changes would not catch problems caused by
changes elsewhere, such as compute/manager.py or virt/driver.py
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-29 Thread Sean McGinnis
> > 
> > It's not mentioned here, but I discovered today that Cinder is using the
> > sphinx.ext.autodoc module. Is there any issue with using this?
> > 
> 
> Nope - sphinx-apidoc and the likes use autodoc under the hood. You can
> see this by checking the output in 'contributor/api' or the likes.
> 
> Stephen
> 

I'm wondering if there is a problem with using this vs. the way being proposed.

In other words, do we need to switch over to this new sphinxcontrib module, or
is staying with autodoc OK? And if so, why not switch current users of
the pbr method over to use sphinx.ext.autodoc rather than introducing something
new?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][nova] about re-image the volume

2018-03-29 Thread 李杰
Hi,all


  This is the spec [0] about rebuilding a volume-backed server. The question 
raised in the spec is how to handle the root volume. In the Nova team, we think 
that the cleanest / best solution to this is to add a volume action API to 
Cinder for re-imaging the volume. Once that is available in a new Cinder v3 
microversion, Nova can use it. The reasons I think this should be done in 
Cinder, by re-imaging the volume there, are (1) it's cleaner from the Nova side 
and (2) Cinder stays in control of how that re-image should happen, along with 
any details it needs to update, e.g. the volume's "volume_image_metadata" 
information would need to be updated. Nova really isn't suited to doing the 
volume create/delete/swap orchestration itself, since that entails issues with 
the volume type being gone, going over quota, what to do about deleting the old 
volume, etc.
  So the Nova team would like Cinder to provide the re-image API. However, I 
see a spec about volume revert by snapshot [1], which is very relevant to the 
rebuild operation. In short, I have two ideas: one is to extend the 
volume-revert-by-snapshot spec into a re-image spec, so that it can not only 
revert a volume to a snapshot but also re-image a volume whose image size is 
greater than 0; the other is to add a separate re-image spec that only 
re-images a volume whose image size is greater than 0.
  What do you think of the two ideas? Any suggestion is welcome. Thank you!
  Note: the instance snapshot for an image-backed server has an image size 
greater than 0, but for a volume-backed server the image size is 0.
  Re:
  [0]https://review.openstack.org/#/c/532407/
  
[1]https://specs.openstack.org/openstack/cinder-specs/specs/pike/cinder-volume-revert-by-snapshot.html














Best Regards
Rambo__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow

2018-03-29 Thread Alex Xu
Agreed: whether we tweak inventory or traits, none of them works.

Same as VGPU, we can support a pre-programmed mode for multiple-function
regions, where each region can only support one function type.

There are two reasons why Cyborg has a filter:
* to record the usage of functions in a region
* to record which function is programmed.

For #1, each region provides multiple functions. Each function can be
assigned to a VM. So we should create a ResourceProvider for the region, and
the resource class is the function. That is similar to an SR-IOV device: the
region (the PF)
provides functions (VFs).

For #2, we should use a trait to distinguish the function type.

Then we no longer keep any inventory info in Cyborg, we don't need
any filter in Cyborg either,
and there is no race condition anymore.
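
In other words (resource class and trait names invented for the example), each
region/PF would be modeled roughly as:

FPGA_REGION_RP: {
  inventory: {
      CUSTOM_FPGA_FUNCTION: 4,     # the assignable functions (like VFs), consumable
  }
  traits: [
      CUSTOM_FUNCTION_TYPE_X,      # which function type is currently programmed
  ]
}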

2018-03-29 2:48 GMT+08:00 Eric Fried :

> Sundar-
>
> We're running across this issue in several places right now.   One
> thing that's definitely not going to get traction is
> automatically/implicitly tweaking inventory in one resource class when
> an allocation is made on a different resource class (whether in the same
> or different RPs).
>
> Slightly less of a nonstarter, but still likely to get significant
> push-back, is the idea of tweaking traits on the fly.  For example, your
> vGPU case might be modeled as:
>
> PGPU_RP: {
>   inventory: {
>   CUSTOM_VGPU_TYPE_A: 2,
>   CUSTOM_VGPU_TYPE_B: 4,
>   }
>   traits: [
>   CUSTOM_VGPU_TYPE_A_CAPABLE,
>   CUSTOM_VGPU_TYPE_B_CAPABLE,
>   ]
> }
>
> The request would come in for
> resources=CUSTOM_VGPU_TYPE_A:1&required=VGPU_TYPE_A_CAPABLE, resulting
> in an allocation of CUSTOM_VGPU_TYPE_A:1.  Now while you're processing
> that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the PGPU_RP.
> So it doesn't matter that there's still inventory of
> CUSTOM_VGPU_TYPE_B:4, because a request including
> required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP.
> There's of course a window between when the initial allocation is made
> and when you tweak the trait list.  In that case you'll just have to
> fail the loser.  This would be like any other failure in e.g. the spawn
> process; it would bubble up, the allocation would be removed; retries
> might happen or whatever.
>
> Like I said, you're likely to get a lot of resistance to this idea
> as
> well.  (Though TBH, I'm not sure how we can stop you beyond -1'ing your
> patches; there's nothing about placement that disallows it.)
>
> The simple-but-inefficient solution is simply that we'd still be
> able
> to make allocations for vGPU type B, but you would have to fail right
> away when it came down to cyborg to attach the resource.  Which is code
> you pretty much have to write anyway.  It's an improvement if cyborg
> gets to be involved in the post-get-allocation-candidates
> weighing/filtering step, because you can do that check at that point to
> help filter out the candidates that would fail.  Of course there's still
> a race condition there, but it's no different than for any other resource.
>
> efried
>
> On 03/28/2018 12:27 PM, Nadathur, Sundar wrote:
> > Hi Eric and all,
> > I should have clarified that this race condition happens only for
> > the case of devices with multiple functions. There is a prior thread
> >  March/127882.html>
> > about it. I was trying to get a solution within Cyborg, but that faces
> > this race condition as well.
> >
> > IIUC, this situation is somewhat similar to the issue with vGPU types
> >  %23openstack-nova.2018-03-27.log.html#t2018-03-27T13:41:00>
> > (thanks to Alex Xu for pointing this out). In the latter case, we could
> > start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4).  But, after
> > consuming a unit of  vGPU-type-a, ideally the inventory should change
> > to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators,
> > we start with an RP inventory of (region-type-A: 1, function-X: 4). But,
> > after consuming a unit of that function, ideally the inventory should
> > change to: (region-type-A: 0, function-X: 3).
> >
> > I understand that this approach is controversial :) Also, one difference
> > from the vGPU case is that the number and count of vGPU types is static,
> > whereas with FPGAs, one could reprogram it to result in more or fewer
> > functions. That said, we could hopefully keep this analogy in mind for
> > future discussions.
> >
> > We probably will not support multi-function accelerators in Rocky. This
> > discussion is for the longer term.
> >
> > Regards,
> > Sundar
> >
> > On 3/23/2018 12:44 PM, Eric Fried wrote:
> >> Sundar-
> >>
> >>  First thought is to simplify by NOT keeping inventory information
> in
> >> the cyborg db at all.  The provider record in the placement service
> >> already knows the device (the provider ID, 

Re: [openstack-dev] [Vitrage] New proposal for analysis.

2018-03-29 Thread MinWookKim
Hello Ifat,
 
Thanks for your reply.  : )


I wrote my opinion on your comment.
 
Why do you think the request should pass through the Vitrage API? Why can’t
vitrage-dashboard call the check component directly?
 
Authentication issues:
I think the check component is a separate, API-based component.
 
In my opinion, if the check component exposes its own API address, separate
from Vitrage, in order to receive requests from the Vitrage-dashboard,
then the Vitrage-dashboard needs to know that API address.
 
This could leave the request / response path open to anyone, outside the
authentication that OpenStack provides between the Vitrage-dashboard and
Vitrage itself.
 
Such requests would be possible not only from the Vitrage-dashboard, but also
with simple commands such as curl.
(I think it is unnecessary to implement a separate authentication system
for the check component.)
 
In other words, anyone who knows the API address of the check component could
make the host or VM execute system commands. That is why I think the request
should pass through the Vitrage API.
 
what should happen if the user closes the check window before the checks
are over? I assume that the checks will finish, but the user won’t be able
to see the results?
 
If the window is closed before the check is finished, the user cannot see
the result.
 
To solve this problem, I think temporarily saving a list of recent results
is a solution.
 
By storing a temporary list (for example, up to 10 entries), the user can see
previous results, and the user could also be allowed to empty the list.
 
How about that?
 
Thank you.
 
Best Regrads,
Minwook.
 
From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com] 
Sent: Thursday, March 29, 2018 8:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.
 
Hi Minwook,
 
Why do you think the request should pass through the Vitrage API? Why can’t
vitrage-dashboard call the check component directly?
 
And another question: what should happen if the user closes the check
window before the checks are over? I assume that the checks will finish, but
the user won’t be able to see the results?
 
Thanks,
Ifat.
 
From: MinWookKim <  delightw...@ssu.ac.kr>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <

openstack-dev@lists.openstack.org>
Date: Thursday, 29 March 2018 at 10:25
To: "'OpenStack Development Mailing List (not for usage questions)'" <

openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.
 
Hello Ifat and Vitrage team.
 
I would like to explain more about the implementation part of the mail I
sent last time.
 
The flow is as follows.
 
Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component
 
The last time I mentioned it as api-handler, it would be better to call the
check component directly from Vitarge-api without having to use it.
 
I hope this helps you understand.
 
Thank you
 
Best Regards,
Minwook. 
 
From: MinWookKim [ 
mailto:delightw...@ssu.ac.kr] 
Sent: Wednesday, March 28, 2018 11:21 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.
 
Hello Ifat, 
 
Thanks for your reply. : )
 
This proposal is a proposal that we expect to be useful from a user
perspective.

From a manager's point of view, we need an implementation that minimizes
the overhead incurred by the proposal.

The answers to some of your questions are:
 
  I assume that these checks will not be implemented in Vitrage, and
the results will not be stored in Vitrage, right? Vitrage role is to be a
place where it is easy and intuitive for the user to execute external
actions/checks.
 
Yes, that's right. We do not need to save it to Vitrage because we just
need to check the results. 
However, it is possible to implement the function directly in
Vitrage-dashboard separately from Vitrage like add-action-list panel, 
but it seems that it is not enough to implement all the functions. 
If you do not mind, we will have the following flow.

1. The user requests the check action from the vitrage-dashboard
(add-action-list-panel).
2. Call the check component through the vitrage's API handler.
3. The check component executes the command and returns the result.
 
Because it is my opinion only, please tell us if there is an unnecessary
part. :)
 
  Do you expect the user to click an entity, select an action to run
(e.g. ‘P2P check’), and wait by the open panel for the results? What if the
user switches to another menu before the check is done? What if the user
asks to run an additional check in parallel? What if the user wants to see
again a previous result?
 
My idea was to select the task, wait for the results in an open panel, and

Re: [openstack-dev] [Vitrage] New proposal for analysis.

2018-03-29 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi Minwook,

Why do you think the request should pass through the Vitrage API? Why can’t 
vitrage-dashboard call the check component directly?

And another question: what should happen if the user closes the check window 
before the checks are over? I assume that the checks will finish, but the user 
won’t be able to see the results?

Thanks,
Ifat.

From: MinWookKim 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 29 March 2018 at 10:25
To: "'OpenStack Development Mailing List (not for usage questions)'" 

Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat and Vitrage team.

I would like to explain more about the implementation part of the mail I sent 
last time.

The flow is as follows.

Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component

The last time I mentioned it as api-handler, it would be better to call the 
check component directly from Vitarge-api without having to use it.

I hope this helps you understand.

Thank you

Best Regards,
Minwook.

From: MinWookKim [mailto:delightw...@ssu.ac.kr]
Sent: Wednesday, March 28, 2018 11:21 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. : )

This proposal is a proposal that we expect to be useful from a user perspective.

From a manager's point of view, we need an implementation that minimizes the 
overhead incurred by the proposal.

The answers to some of your questions are:


•   I assume that these checks will not be implemented in Vitrage, and the 
results will not be stored in Vitrage, right? Vitrage role is to be a place 
where it is easy and intuitive for the user to execute external actions/checks.

Yes, that's right. We do not need to save it to Vitrage because we just need to 
check the results.
However, it is possible to implement the function directly in Vitrage-dashboard 
separately from Vitrage like add-action-list panel,
but it seems that it is not enough to implement all the functions.
If you do not mind, we will have the following flow.

1. The user requests the check action from the vitrage-dashboard 
(add-action-list-panel).
2. Call the check component through the vitrage's API handler.
3. The check component executes the command and returns the result.

Because it is my opinion only, please tell us if there is an unnecessary part. 
:)


•   Do you expect the user to click an entity, select an action to run 
(e.g. ‘P2P check’), and wait by the open panel for the results? What if the 
user switches to another menu before the check is done? What if the user asks 
to run an additional check in parallel? What if the user wants to see again a 
previous result?


My idea was to select the task, wait for the results in an open panel, and then 
instantly see it in the panel.
If we switch to another menu before the scan is complete, we will not be able 
to see the results.
Parallel checking is a matter of fact. (This can cause excessive overhead.)
For earlier results, it may be okay to temporarily save the open panel until we 
exit the panel. We can see the previous results through the temporary saved 
results.


•   Any thoughts of what component will implement those checks? Or maybe 
these will be just scripts?

I think I implement a separate component to request it.


•   It could be nice if, as a result of an action check, a new alarm will 
be raised in Vitrage. A specific alarm with the additional details that were 
found. However, it might not be trivial to implement it. We could think about 
it as phase #2.


It is expected to be really good. It would be very useful if an Entity-Graph 
generates an alarm based on the check result.
I think that part will be able to talk in detail later.
My answer is my opinions and assumptions.
If you think my implementation is wrong, or an inefficient implementation, 
please do not hesitate to tell me.

Thanks.

Best Regards,
Minwook.
From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com]
Sent: Wednesday, March 28, 2018 2:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I think that from a user’s perspective, these are very good ideas.

I have some questions regarding the UX and the implementation, since I’m trying 
to think what could be the best way to execute such actions from Vitrage.


· I assume that these checks will not be implemented in Vitrage, and 
the results will not be stored in Vitrage, right? Vitrage role is to be a place 
where it is easy and intuitive for the user to execute external actions/checks.

· Do you expect the user to click an entity, select an action to run 
(e.g. ‘P2P check’), and wait by the open panel for the results? What if the 

Re: [openstack-dev] [nova] VMware NSX CI - no longer running?

2018-03-29 Thread melanie witt

On Thu, 29 Mar 2018 10:09:09 +, Gary Kotton wrote:

Here is an example where the CI has run on a recent patch - yesterday 
-https://review.openstack.org/557256


Thanks. Just curious, how is the CI passing if the driver is currently 
broken for detach_volume? I had thought maybe particular tests were 
skipped in response to my original email that linked the bug fix patch, 
but it looks like that run was from before I sent the original email.


-melanie




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VMware NSX CI - no longer running?

2018-03-29 Thread Gary Kotton
Hi,
Here is an example where the CI has run on a recent patch - yesterday - 
https://review.openstack.org/557256
Thanks
Gary

On 3/29/18, 1:04 PM, "melanie witt"  wrote:

On Thu, 29 Mar 2018 10:44:52 +0300, Radoslav Gerganov wrote:
> On 28.03.2018 19:07, melanie witt wrote:
>> We were reviewing a bug fix for the vmware driver [0] today and we 
noticed it appears that the VMware NSX CI is no longer running, not even on 
only the nova/virt/vmwareapi/ tree.
>>
>>  From the third-party CI dashboard, I see some claims of it running but 
when I open the patches, I don't see any reporting from VMware NSX CI [1].
>>
>> Can anyone from the vmware subteam comment on whether or not the vmware 
third-party CI is going to be fixed or if it has been abandoned?
>>
> 
> While running the VMware CI continues to be a challenge, I must say this 
patch fixes a regression introduced by Matt Riedemann's patch:
> 
> https://review.openstack.org/#/c/549411/
> 
> for which the VMware CI clearly indicated there was a problem and 
nevertheless the core team submitted it.
> Before blaming the CI for not voting enough, the core team should start 
taking into account existing CI votes.
> It'd be nice also to include VMware driver maintainers as reviewers when 
making changes to the VMware driver.

Thank you for bringing the root cause to our attention and I'm sorry we 
made a mistake that broke the driver. You are right that the VMware CI 
vote should have been taken into consideration and that the VMware 
subteam members should have been added as reviewers on the patch.

It was not my intention to blame the VMware CI for not voting enough. I 
just wanted to know what happened to it and whether or not it is being 
maintained.

I would like to see the VMware CI running again and it need only run on 
changes under the nova/virt/vmwareapi/ tree, to save on your resources. 
And on our side, I'd like us to add VMware subteam members to VMware 
driver patch reviews (I believe most of the active team members are 
listed on the priorities etherpad [0]) and to be sure we consult VMware 
CI votes when we review.

Best,
-melanie

[0] https://etherpad.openstack.org/p/rocky-nova-priorities-tracking L256




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VMware NSX CI - no longer running?

2018-03-29 Thread melanie witt

On Thu, 29 Mar 2018 10:44:52 +0300, Radoslav Gerganov wrote:

On 28.03.2018 19:07, melanie witt wrote:

We were reviewing a bug fix for the vmware driver [0] today and we noticed it 
appears that the VMware NSX CI is no longer running, not even on only the 
nova/virt/vmwareapi/ tree.

 From the third-party CI dashboard, I see some claims of it running but when I 
open the patches, I don't see any reporting from VMware NSX CI [1].

Can anyone from the vmware subteam comment on whether or not the vmware 
third-party CI is going to be fixed or if it has been abandoned?



While running the VMware CI continues to be a challenge, I must say this patch 
fixes a regression introduced by Matt Riedemann's patch:

https://review.openstack.org/#/c/549411/

for which the VMware CI clearly indicated there was a problem and nevertheless 
the core team submitted it.
Before blaming the CI for not voting enough, the core team should start taking 
into account existing CI votes.
It'd be nice also to include VMware driver maintainers as reviewers when making 
changes to the VMware driver.


Thank you for bringing the root cause to our attention and I'm sorry we 
made a mistake that broke the driver. You are right that the VMware CI 
vote should have been taken into consideration and that the VMware 
subteam members should have been added as reviewers on the patch.


It was not my intention to blame the VMware CI for not voting enough. I 
just wanted to know what happened to it and whether or not it is being 
maintained.


I would like to see the VMware CI running again and it need only run on 
changes under the nova/virt/vmwareapi/ tree, to save on your resources. 
And on our side, I'd like us to add VMware subteam members to VMware 
driver patch reviews (I believe most of the active team members are 
listed on the priorities etherpad [0]) and to be sure we consult VMware 
CI votes when we review.


Best,
-melanie

[0] https://etherpad.openstack.org/p/rocky-nova-priorities-tracking L256




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Adding objgraph to global requirements

2018-03-29 Thread Renat Akhmerov
After some discussion in IRC on this topic, there was an idea to simply write and 
push upstream the needed tools using objgraph without having it in requirements.txt 
at all. We just need to make sure that those tools are never used during 
production runs and unit tests (CI will help to verify that). If needed, 
objgraph can be manually installed and used when we need to investigate something.

If such a practice is considered OK and doesn’t violate any OpenStack guidelines, 
then I think this would work, at least in my case.
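
For illustration only (this is not existing code anywhere), here is the kind of
throwaway helper such a tool could contain, assuming objgraph has been installed
by hand:

    # Illustrative sketch of a manual debugging helper, deliberately kept out of
    # requirements.txt; assumes 'pip install objgraph' was done by hand.
    import gc

    import objgraph

    def dump_growth(limit=10):
        """Print the object types whose counts grew since the last call."""
        gc.collect()
        objgraph.show_growth(limit=limit)

    # Usage: call once to set a baseline, run the suspected workload, then call
    # again to see which object types kept growing.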

Thanks

Renat Akhmerov
@Nokia

On 29 Mar 2018, 15:00 +0700, Renat Akhmerov , wrote:
> Hi,
>
> Can we consider adding objgraph [1] to OpenStack global requirements? I found 
> this library extremely useful for investigating memory leaks in Python 
> programs but unfortunately I can’t push upstream any code using it. It seems 
> to be pretty mature and supports all needed Python versions.
>
> Or maybe there’s some alternative already available in the OpenStack 
> requirements?
>
> [1] https://pypi.python.org/pypi/objgraph/3.4.0
>
>
> Thanks
>
> Renat Akhmerov
> @Nokia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][stable] No more stable Phases welcome Extended Maintenance

2018-03-29 Thread Tony Breeds
Hi all,
At Sydney we started the process of change on the stable branches.
Recently we merged a TC resolution[1] to alter the EOL process.  The
next step is refining the stable policy itself.

I've created a review to do that.  I think it covers most of the points
from Sydney and Dublin.

Please check it out:
  https://review.openstack.org/#/c/552733/

Yours Tony.

[1] https://review.openstack.org/548916


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol

2018-03-29 Thread Tony Breeds
On Thu, Mar 15, 2018 at 10:57:58AM +, Jean-Philippe Evrard wrote:
> Looks good to me.

This has been done now.  Thanks for being patient :)

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-29 Thread Tony Breeds
On Fri, Mar 16, 2018 at 02:29:51PM +, Kwan, Louie wrote:
> In the stable/queens branch, since openstacksdk===0.11.3 and 
> os-service-types===1.1.0 are described in openstack's upper-constraints.txt, 
> 
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297
> 
> If I do 
> 
> > git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens
> 
> And then stack.sh
> 
> We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Okay that's pretty strange.  I can't think of why you'd be getting the
master version of upper-constraints.txt from the queens branch.

[tony@thor requirements]$ tools/grep-all.sh openstacksdk | grep -E '(master|queens)'
origin/master : openstacksdk>=0.11.2  # Apache-2.0
origin/stable/queens  : openstacksdk>=0.9.19  # Apache-2.0
origin/master : openstacksdk===0.12.0
origin/stable/queens  : openstacksdk===0.11.3
[tony@thor requirements]$ tools/grep-all.sh os-service-types | grep -E '(master|queens)'
origin/master : os-service-types>=1.2.0  # Apache-2.0
origin/stable/queens  : os-service-types>=1.1.0  # Apache-2.0
origin/master : os-service-types===1.2.0
origin/stable/queens  : os-service-types===1.1.0


A quick eyeball of the code doesn't show anything obvious.

Can you provide the devstack log somewhere?
 
> Having said that, we need the older version, how to configure devstack to use 
> openstacksdk===0.11.3 and os-service-types===1.1.0

We can try to work out why you're getting the wrong versions, but what
error/problem do you see with the versions from master?

I'd expect some general "we need version X of FOO but Y is installed"
messages.
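
(A quick, purely illustrative way to confirm what actually got installed on the
host, independent of the devstack log:)

    # Illustrative only: print the installed versions of the two packages in
    # question, to compare against the expected upper-constraints pins.
    import pkg_resources

    for name in ('openstacksdk', 'os-service-types'):
        print(name, pkg_resources.get_distribution(name).version)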

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] Adding objgraph to global requirements

2018-03-29 Thread Renat Akhmerov
Hi,

Can we consider adding objgraph [1] to OpenStack global requirements? I found 
this library extremely useful for investigating memory leaks in Python programs 
but unfortunately I can’t push upstream any code using it. It seems to be 
pretty mature and supports all needed Python versions.

Or maybe there’s some alternative already available in the OpenStack 
requirements?

[1] https://pypi.python.org/pypi/objgraph/3.4.0


Thanks

Renat Akhmerov
@Nokia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VMware NSX CI - no longer running?

2018-03-29 Thread Radoslav Gerganov

On 28.03.2018 19:07, melanie witt wrote:

We were reviewing a bug fix for the vmware driver [0] today and we noticed it 
appears that the VMware NSX CI is no longer running, not even on only the 
nova/virt/vmwareapi/ tree.

From the third-party CI dashboard, I see some claims of it running but when I 
open the patches, I don't see any reporting from VMware NSX CI [1].

Can anyone from the vmware subteam comment on whether or not the vmware 
third-party CI is going to be fixed or if it has been abandoned?



While running the VMware CI continues to be a challenge, I must say this patch 
fixes a regression introduced by Matt Riedemann's patch:

https://review.openstack.org/#/c/549411/

for which the VMware CI clearly indicated there was a problem and nevertheless 
the core team submitted it.
Before blaming the CI for not voting enough, the core team should start taking 
into account existing CI votes.
It'd be nice also to include VMware driver maintainers as reviewers when making 
changes to the VMware driver.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Vitrage] New proposal for analysis.

2018-03-29 Thread MinWookKim
Hello Ifat and Vitrage team.
 
I would like to explain more about the implementation part of the mail I
sent last time.
 
The flow is as follows.
 
Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component
 
Last time I mentioned the api-handler, but it would be better to call the
check component directly from Vitrage-api without having to go through it.
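
To make the flow a bit more concrete, here is a very rough sketch of what the
check component's entry point could look like. This is purely illustrative;
none of these names exist in Vitrage today, and the real design is still open:

    # Purely illustrative sketch; not existing Vitrage code.
    import subprocess

    # Hypothetical registry of supported checks; each maps to a command line.
    CHECKS = {
        'p2p_check': ['ping', '-c', '3'],
    }

    def run_check(check_name, target):
        """Run a named check against a target and return its raw result."""
        cmd = CHECKS[check_name] + [target]
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        output, _ = proc.communicate()
        return {'check': check_name,
                'target': target,
                'rc': proc.returncode,
                'output': output}

In this sketch the Vitrage API layer would only pass the request through and
hand the returned result back to the dashboard panel, so nothing is stored in
Vitrage itself.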
 
I hope this helps you understand.
 
Thank you
 
Best Regards,
Minwook. 
 
From: MinWookKim [mailto:delightw...@ssu.ac.kr] 
Sent: Wednesday, March 28, 2018 11:21 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.
 
Hello Ifat, 
 
Thanks for your reply. : )
 
This proposal is a proposal that we expect to be useful from a user
perspective.

From a manager's point of view, we need an implementation that minimizes
the overhead incurred by the proposal.

The answers to some of your questions are:
 
•  I assume that these checks will not be implemented in Vitrage, and
the results will not be stored in Vitrage, right? Vitrage role is to be a
place where it is easy and intuitive for the user to execute external
actions/checks.
 
Yes, that's right. We do not need to save it to Vitrage because we just
need to check the results. 
However, it would also be possible to implement the function directly in
Vitrage-dashboard, separately from Vitrage, like the add-action-list panel,
but the dashboard alone does not seem to be enough to implement all the functions.
If you do not mind, we would have the following flow:

1. The user requests the check action from the vitrage-dashboard
(add-action-list-panel).
2. Call the check component through Vitrage's API handler.
3. The check component executes the command and returns the result.
 
Because it is my opinion only, please tell us if there is an unnecessary
part. :)
 
•  Do you expect the user to click an entity, select an action to run
(e.g. ‘P2P check’), and wait by the open panel for the results? What if the
user switches to another menu before the check is done? What if the user
asks to run an additional check in parallel? What if the user wants to see
again a previous result?
 
My idea was to select the task, wait for the results in an open panel, and
then see them instantly in the panel.
If we switch to another menu before the check is complete, we will not be
able to see the results.
Parallel checking is indeed a concern (it can cause excessive overhead).
For earlier results, it may be okay to temporarily keep them in the open panel
until we exit the panel. We can then see the previous results through the
temporarily saved results.
 
•  Any thoughts of what component will implement those checks? Or maybe
these will be just scripts?
 
I think I would implement a separate component to handle it.
 
•   It could be nice if, as a result of an action check, a new alarm
will be raised in Vitrage. A specific alarm with the additional details that
were found. However, it might not be trivial to implement it. We could think
about it as phase #2.
 
That is expected to be really good. It would be very useful if the
Entity-Graph generated an alarm based on the check result.
I think we will be able to talk about that part in detail later.
My answers are only my own opinions and assumptions.
If you think my implementation is wrong or inefficient,
please do not hesitate to tell me.
 
Thanks.
 
Best Regards,
Minwook. 
From: Afek, Ifat (Nokia - IL/Kfar Sava) [ 
mailto:ifat.a...@nokia.com] 
Sent: Wednesday, March 28, 2018 2:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.
 
Hi Minwook,
 
I think that from a user’s perspective, these are very good ideas.
 
I have some questions regarding the UX and the implementation, since I’m
trying to think what could be the best way to execute such actions from
Vitrage.
 
* I assume that these checks will not be implemented in Vitrage,
and the results will not be stored in Vitrage, right? Vitrage role is to be
a place where it is easy and intuitive for the user to execute external
actions/checks.
* Do you expect the user to click an entity, select an action to
run (e.g. ‘P2P check’), and wait by the open panel for the results? What if
the user switches to another menu before the check is done? What if the user
asks to run an additional check in parallel? What if the user wants to see
again a previous result?
* Any thoughts of what component will implement those checks? Or
maybe these will be just scripts? 
* It could be nice if, as a result of an action check, a new alarm
will be raised in Vitrage. A specific alarm with the additional details that
were found. However, it might not be trivial to implement it. We could think
about it as phase #2.
 
Best Regards,
Ifat
 
 
From: MinWookKim <  delightw...@ssu.ac.kr>
Reply-To: "OpenStack Development Mailing List (not for 

Re: [openstack-dev] Thank you TryStack!!

2018-03-29 Thread Rich Bowen
A huge thank you to Will and Kambiz who maintained this service in 
addition to their real jobs, for all of these years. And a big thank you 
to the folks who have helped to retire the service and transition over 
to Passport.


--Rich

On 03/27/2018 06:51 AM, Jimmy Mcarthur wrote:

Hi everyone,

We recently made the tough decision, in conjunction with the dedicated 
volunteers that run TryStack, to end the service as of March 29, 2018. 
For those of you that used it, thank you for being part of the TryStack 
community.


The good news is that you can find more resources to try OpenStack at 
http://www.openstack.org/start, including the Passport Program, where you can 
test on any participating public cloud. If you are looking to test different 
tools or application stacks with OpenStack clouds, you should check out Open Lab.


Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and 
the many other volunteers who have managed this valuable service for the 
last several years!  Your contribution to OpenStack was noticed and 
appreciated by many in the community.



--
Rich Bowen: Community Architect
rbo...@redhat.com
@rbowen // @RDOCommunity // @CentOSProject
1 859 351 9166

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev