Re: [openstack-dev] [nova] nova aggregate and nova placement api aggregate

2018-05-24 Thread Matt Riedemann

On 5/24/2018 8:33 PM, Jeffrey Zhang wrote:
Recently, I have been trying to implement a function that aggregates Nova
hypervisors rather than Nova compute hosts, but it seems Nova only
aggregates nova-compute hosts.

On the other hand, since Ocata, Nova depends on the Placement API, which
supports aggregating resource providers, but nova-scheduler doesn't use
this feature yet.

So is there a better way to solve this issue? And is there any plan to
make the legacy Nova aggregates and the Placement API aggregates work
together?


There are some new features in Rocky [1] that involve resource provider 
aggregates for compute nodes which can be used for scheduling and will 
actually allow you to remove some older filters 
(AggregateMultiTenancyIsolation and AvailabilityZoneFilter). CERN is 
using these to improve performance with their cells v2 deployment.
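
For anyone wanting to experiment with those, the rough shape of the
configuration is something like the sketch below. The option names are from
memory of the Rocky specs, so treat them as assumptions and double-check the
release notes before relying on them:

  [scheduler]
  # replaces the AggregateMultiTenancyIsolation filter with a placement
  # aggregate pre-filter
  limit_tenants_to_placement_aggregate = True
  placement_aggregate_required_for_tenants = False
  # replaces the AvailabilityZoneFilter with a placement aggregate pre-filter
  query_placement_for_availability_zone = True

The host aggregate to placement aggregate mirroring can then be done with
"nova-manage placement sync_aggregates" (also new in Rocky, if I recall
correctly).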


--

Thanks,

Matt



Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-24 Thread Tim Bell
I'd like to understand the phrase "StarlingX is an OpenStack Foundation Edge 
focus area project".

My understanding of the current situation is that "StarlingX would like to be 
OpenStack Foundation Edge focus area project".

I have not been able to keep up with all of the discussions, so I'd welcome 
further URLs to help me understand the current situation and the 
(formal/informal) processes to arrive at this conclusion.

Tim

-Original Message-
From: Dean Troyer 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 23 May 2018 at 11:08
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

On Wed, May 23, 2018 at 11:49 AM, Colleen Murphy  
wrote:
> It's also important to make the distinction between hosting something on 
openstack.org infrastructure and recognizing it in an official capacity. 
StarlingX is seeking both, but in my opinion the code hosting is not the 
problem here.

StarlingX is an OpenStack Foundation Edge focus area project and is
seeking to use the CI infrastructure.  There may be a project or two
contained within that may make sense as OpenStack projects in the
not-called-big-tent-anymore sense, but that is not on the table; there
is a lot of work to digest before we could even consider that.  Is
that the official capacity you are talking about?

dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] [nova] nova aggregate and nova placement api aggregate

2018-05-24 Thread Chen CH Ji
Not sure whether it will be helpful, FYI:

https://developer.openstack.org/api-ref/placement/

The primary differences between Nova's host aggregates and placement
aggregates are the following:

* In Nova, a host aggregate associates a nova-compute service with other
  nova-compute services. Placement aggregates are not specific to a
  nova-compute service and are, in fact, not compute-specific at all. A
  resource provider in the Placement API is generic, and placement
  aggregates are simply groups of generic resource providers. This is an
  important difference especially for Ironic, which, when used with Nova,
  has many Ironic baremetal nodes attached to a single nova-compute
  service. In the Placement API, each Ironic baremetal node is its own
  resource provider and can therefore be associated with other Ironic
  baremetal nodes via a placement aggregate association.

* In Nova, a host aggregate may have metadata key/value pairs attached to
  it. All nova-compute services associated with a Nova host aggregate
  share the same metadata. Placement aggregates have no such metadata
  because placement aggregates only represent the grouping of resource
  providers. In the Placement API, resource providers are individually
  decorated with traits that provide qualitative information about the
  resource provider.

* In Nova, a host aggregate dictates the availability zone within which
  one or more nova-compute services reside. While placement aggregates
  may be used to model availability zones, they have no inherent concept
  thereof.
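
For anyone who wants to poke at the grouping directly, here is a minimal
sketch of associating a resource provider with a placement aggregate over
the REST API. The endpoint URL, UUIDs and generation value below are
illustrative assumptions, not values from this thread:

  TOKEN=$(openstack token issue -f value -c id)
  PLACEMENT=http://controller/placement   # adjust to your deployment
  RP_UUID=4f7c6844-0000-0000-0000-000000000001    # e.g. an Ironic node RP
  AGG_UUID=7f8a3f97-0000-0000-0000-000000000002   # the aggregate to join

  # Replace the provider's aggregate list; microversion 1.19+ also wants
  # the current resource_provider_generation (older microversions take
  # just the aggregates list).
  curl -s -X PUT "$PLACEMENT/resource_providers/$RP_UUID/aggregates" \
       -H "X-Auth-Token: $TOKEN" \
       -H "Content-Type: application/json" \
       -H "OpenStack-API-Version: placement 1.19" \
       -d "{\"aggregates\": [\"$AGG_UUID\"], \"resource_provider_generation\": 1}"

If memory serves, the osc-placement CLI plugin wraps the same call as
"openstack resource provider aggregate set", which may be more convenient
than hitting the API by hand.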

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Jeffrey Zhang 
To: OpenStack Development Mailing List

Date:   05/25/2018 11:34 AM
Subject:[openstack-dev] [nova] nova aggregate and nova placement api
aggregate



Recently, I have been trying to implement a function that aggregates Nova
hypervisors rather than Nova compute hosts, but it seems Nova only
aggregates nova-compute hosts.

On the other hand, since Ocata, Nova depends on the Placement API, which
supports aggregating resource providers, but nova-scheduler doesn't use
this feature yet.

So is there a better way to solve this issue? And is there any plan to
make the legacy Nova aggregates and the Placement API aggregates work
together?


--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


[openstack-dev] [nova] nova aggregate and nova placement api aggregate

2018-05-24 Thread Jeffrey Zhang
Recently, I have been trying to implement a function that aggregates Nova
hypervisors rather than Nova compute hosts, but it seems Nova only
aggregates nova-compute hosts.

On the other hand, since Ocata, Nova depends on the Placement API, which
supports aggregating resource providers, but nova-scheduler doesn't use
this feature yet.

So is there a better way to solve this issue? And is there any plan to
make the legacy Nova aggregates and the Placement API aggregates work
together?


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


[openstack-dev] [neutron] Status of neutron-rpc-server

2018-05-24 Thread Thomas Goirand
Hi,

I'd like to know the status of neutron-rpc-server. Since I switched the
Debian package from neutron-server to neutron-api running under uwsgi, I
tried using it, and it seems to kind of work if I apply this patch:

https://review.openstack.org/#/c/555608
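
For reference, the split deployment being described looks roughly like the
following; the uwsgi ini path and config file names are illustrative
assumptions, not the actual Debian packaging layout:

  # API side, served by uwsgi instead of the monolithic neutron-server
  uwsgi --ini /etc/neutron/neutron-api-uwsgi.ini

  # RPC side, handling the agents' message queue traffic
  neutron-rpc-server --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini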

Is there anything else that I should know?

Cheers,

Thomas Goirand (zigo)



[openstack-dev] [nova] Need some feedback on the proposed heal_allocations CLI

2018-05-24 Thread Matt Riedemann
I've written a nova-manage placement heal_allocations CLI [1], which was 
a TODO from the PTG in Dublin as a step toward getting existing 
CachingScheduler users to roll off of it (the CachingScheduler is deprecated).
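
For anyone wanting to try it, usage is roughly the following; the flag
names are from the current patch set and could still change before merge:

  # Heal allocations for up to 50 instances, then stop
  nova-manage placement heal_allocations --max-count 50 --verbose

  # Or run it unbounded and let it walk all instances in all cells
  nova-manage placement heal_allocations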


During the CERN cells v1 upgrade talk it was pointed out that CERN was 
able to go from placement-per-cell to centralized placement in Ocata 
because the nova-computes in each cell would automatically recreate the 
allocations in Placement in a periodic task, but that code is gone once 
you're upgraded to Pike or later.


In various other talks during the summit this week, we've talked about 
upgrade scenarios where, for instance, placement is down for some reason 
during an upgrade, a user deletes an instance, and the allocation doesn't 
get cleaned up from placement, so it continues counting against resource 
usage on that compute node even though the server instance in nova is 
gone. So this CLI could be expanded to help clean up situations like 
that, e.g. provide it a specific server ID and the CLI can figure out if 
it needs to clean things up in placement.


So there are plenty of things we can build into this, but the patch is 
already quite large. I expect we'll also be backporting this to stable 
branches to help operators upgrade/fix allocation issues. It already has 
several things listed in a code comment inline about things to build 
into this later.


My question is: is this good enough for a first iteration, or is there 
something severely missing before we can merge this, like the automatic 
marker tracking mentioned in the code (which will probably be a 
non-trivial amount of code to add)? I could really use some operator 
feedback here: take a look at what the CLI is already capable of, and if 
it's not going to be useful in this iteration, let me know what's missing 
and I can add that to the patch.


[1] https://review.openstack.org/#/c/565886/

--

Thanks,

Matt



Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-24 Thread Dan Smith
> For example, I look at your nova fork and it has a "don't allow this
> call during an upgrade" decorator on many API calls. Why wasn't that
> done upstream? It doesn't seem overly controversial, so it would be
> useful to understand the reasoning for that change.

Interesting. We have internal accounting for service versions and can
determine whether we're in an upgrade scenario (and do block operations
until the upgrade is over). Unless this decorator you're looking at checks
some non-upstream is-during-upgrade flag, this would be an easy gap to
close.

--Dan



Re: [openstack-dev] FW: [cinder]

2018-05-24 Thread Ivan Kolodyazhny
Hello,

Please try `cinder --os-volume-api-version=3 manageable-list
openstack-dev@VMAX_ISCSI_DIAMOND#Diamond+DSS+SRP_1+000297000333` or
`OS_VOLUME_API_VERSION=3 cinder manageable-list
openstack-dev@VMAX_ISCSI_DIAMOND#Diamond+DSS+SRP_1+000297000333`.

DevStack used Cinder API v2 by default before
https://review.openstack.org/#/c/566747/ was merged.


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Thu, May 24, 2018 at 2:34 AM, Walsh, Helen  wrote:

> Sending on Michael’s behalf…
>
>
>
>
>
> From: McAleer, Michael
> Sent: Monday 21 May 2018 15:18
> To: 'openstack-dev@lists.openstack.org'
> Subject: FW: [openstack-dev] [cinder]
>
>
>
> Hi Cinder Devs,
>
>
>
> I would like to ask a question concerning Cinder CLI commands in DevStack
> 13.0.0.0b2.dev167.
>
>
>
> I stacked a clean environment this morning to run through some sanity
> tests of new features, two of which are list manageable volumes and
> snapshots. When I attempt to run this command using Cinder CLI I am
> getting an invalid choice error in response:
>
>
>
> stack@openstack-dev:~/devstack$ cinder manageable-list
> openstack-dev@VMAX_ISCSI_DIAMOND#Diamond+DSS+SRP_1+000297000333
>
> [usage output omitted]
>
> error: argument : invalid choice: u'manageable-list'
>
>
>
> The same behaviour can be seen for listing manageable-snapshots also, an
> invalid choice error. I looked for a similar command using the OpenStack
> Volume CLI, but there weren't any similar commands that would return a
> list of manageable volumes or snapshots.
>
>
> I didn’t see any deprecation notices for the command, and the commands
> worked fine in earlier DevStack environments in this Rocky dev cycle, so
> just wondering what the status is of the commands and if this is possibly
> an oversight.
>
>
>
> Thanks!
>
> Michael
>
>
>
> Michael McAleer
>
> Software Engineer 1, Core Technologies
>
> Dell EMC | Enterprise Storage Division
>
> Phone: +353 21 428 1729
>
> michael.mcal...@dell.com
>
> Ireland COE, Ovens, Co. Cork, Ireland
>


Re: [openstack-dev] [tripleo][ci][infra] Quickstart Branching

2018-05-24 Thread Bogdan Dobrelya

On 5/23/18 6:49 PM, Sagi Shnaidman wrote:

Alex,

the problem is that you're working on and focusing mostly on 
release-specific code like featuresets and some scripts. But 
tripleo-quickstart(-extras) and tripleo-ci are much, *much* more than a 
set of featuresets. Only 10% of the code may be related to releases and 
branches, while the other 90% is completely independent and not related 
to releases.


So for that 90% of the code we would need to backport *every* change. Take, 
for example, the latest patch to extras: 
https://review.openstack.org/#/c/570167/, which fixes the reproducer. If 
oooq-extras were branched, we would need to backport this fix to each and 
every branch, and the same goes for the rest of that 90% of the code, which 
is complete nonsense. Just to avoid the "{% if release %}" construct, 
should we block the whole work of the CI team and make the CI code 
absolutely unmaintainable?


Some of the release-related templates we moved recently from tripleo-ci to 
the THT repo, like scenarios, OC templates, etc. If we discover other 
things in oooq that could be moved to the branched THT, I'd only be happy 
about that.


Sometimes it can be hard to maintain one file in the extras templates with 
different logic for releases, as we have in the tempest configuration for 
example. The solution is to create a few release-related templates and use 
the one that matches the current branch. That doesn't affect the other 90% 
of the code and is still a "branch-like" approach. But I haven't seen other 
scripts that are so release dependent. If we get some, we could do the 
same. For now I see the "{% if release %}" construct working very well.


I still don't see any advantage to branching the CI code, except slightly 
nicer jinja templates without "{% if release %}", but the number of 
disadvantages is so huge that it would literally block all current work in CI.


[tl;dr] branching allows us to not run cloned branched jobs against master 
patches. Otherwise patches will wait longer in queues and fail more often 
because of intermittent infra issues. See the explanation and some 
calculations below.


So my main concern against additional stable-release cloned jobs 
executed for master branches is that there is an "infra failure fee", 
which is a failure unrelated to the patch under check or gate, like an 
intermittent connectivity/timeout induced failure. This is normally 
followed by a 'recheck' comment posted by an engineer, and is sometimes 
noticed by the elastic recheck bot as well. Say that sort of failure has 
a probability of N, and the real "product failure", which is related to 
the subject patch and not to infra, has a probability of P. Then the 
chance that a job fails is


F = 1 - (1 - N) * (1 - P).

Now that we have added two more "branched clone" RDO CI OVB jobs and two 
more zuul jobs, the equation becomes


F = 1 - (1 - N)^4 * (1 - P).

(I assumed the chances to face a product defect for the cloned branched 
jobs remain unchanged).
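
To make that concrete with made-up numbers (not taken from [0]): with an 
infra failure probability N = 0.1 and a product failure probability 
P = 0.05, a single job fails with F = 1 - 0.9 * 0.95 ≈ 0.15, while with 
four infra-exposed jobs F = 1 - 0.9^4 * 0.95 ≈ 0.38, i.e. roughly 2.5 
times as many rechecks for the same patches.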


This might bring significantly increased chances of failure (see some 
examples [0] for the N/P distribution cases). So folks will start 
posting 'recheck' comments even more often, maybe twice as often, which 
would make the zuul and RDO CI queues larger and leave patches sitting 
there longer, ending up with more time waiting for jobs to start their 
check/gate pipelines. That's what I call 'recheck storms'. And without 
branched quickstart/extras, we might have those storms amplified, though 
that fully depends on the real N/P distributions.


[0] https://pastebin.com/ckG5G7NG



Thanks



On Wed, May 23, 2018 at 7:04 PM, Alex Schultz > wrote:


On Wed, May 23, 2018 at 8:30 AM, Sagi Shnaidman > wrote:
> Hi, Sergii
>
> thanks for the question. It's not the first time this topic has been
> raised, and at first glance it could seem that branching would help with
> that sort of issue.
>
> Although that's not the case. Tripleo-quickstart(-extras) is part of the
> CI code, as is the tripleo-ci repo, which has never been branched. The
> reason for that is the relatively small impact of product branching on
> CI code. Think about backporting almost *every* patch to oooq and extras
> to all supported branches, down to newton at least. This would be a
> really *huge* price and unreasonable work. Just think about actively
> maintaining 3-4 versions of CI code in each of 3 repositories. It would
> take all of the CI team's time with almost zero value from this work.
>

So I'm not sure I completely agree with this assessment, as there is a
price paid for every {% if release in [...] %} that we have to carry in
oooq{,-extras}.  These go away if we branch because we don't have to
worry about breaking previous releases or the current release (which may
or may not actually have CI results).

> Regarding the patch you listed, we would have had to backport this change
> to *every* branch, and it wouldn't really help to avoid the issue. The
> source of

[openstack-dev] [tripleo] Using derive parameters workflow for FixedIPs

2018-05-24 Thread Saravanan KR
As discussed on IRC, here is the outline:

* The derive parameters workflow could be used for deriving the FixedIPs
parameters as well (started as part of the review
https://review.openstack.org/#/c/569818/)
* The above derivation should be done for all deployments, so the
invocation of derive parameters should be moved outside the "-p" option
check
* But invoking the NFV and HCI formulas should still be based on the
user's option. Either add a condition using the feature's existing
workflow_parameter, [or] introduce a workflow_parameter to control the
user preference (see the sketch after this list)
* In the derive params workflow, we need to separate out whether we need
introspection data or not. Based on user preference and feature presence,
add checks to see if introspection data is required. If we don't do this,
introspection will become mandatory for all deployments.
* Merging of parameters will be the same as today, with preference given
to the user-provided parameters
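
As a rough sketch of what the user-facing side of this could look like in
plan-environment.yaml (the workflow name follows the existing derive params
workflow naming, if I recall correctly; the derive_fixed_ips flag is purely
hypothetical and only shown for illustration):

  workflow_parameters:
    tripleo.derive_params.v1.derive_parameters:
      # hypothetical user preference controlling the FixedIPs derivation
      derive_fixed_ips: true
      # existing-style input for the HCI formulas
      hci_profile: default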

Future Enhancement

* Instead of using plan-environment.yaml, write the derived parameters
to a separate environment file, and add it to the environments list of
plan-environment.yaml to allow heat merging to work
https://review.openstack.org/#/c/448209

Regards,
Saravanan KR



[openstack-dev] FW: [cinder]

2018-05-24 Thread Walsh, Helen
Sending on Michael's behalf...


From: McAleer, Michael
Sent: Monday 21 May 2018 15:18
To: 'openstack-dev@lists.openstack.org'
Subject: FW: [openstack-dev] [cinder]

Hi Cinder Devs,

I would like to ask a question concerning Cinder CLI commands in DevStack 
13.0.0.0b2.dev167.

I stacked a clean environment this morning to run through some sanity tests of 
new features, two of which are list manageable volumes and snapshots. When I 
attempt to run this command using Cinder CLI I am getting an invalid choice 
error in response:

stack@openstack-dev:~/devstack$ cinder manageable-list 
openstack-dev@VMAX_ISCSI_DIAMOND#Diamond+DSS+SRP_1+000297000333
[usage output omitted]
error: argument : invalid choice: u'manageable-list'

The same behaviour can be seen for listing manageable-snapshots also, an 
invalid choice error. I looked for a similar command using the OpenStack 
Volume CLI, but there weren't any similar commands that would return a list 
of manageable volumes or snapshots.

I didn't see any deprecation notices for the command, and the commands worked 
fine in earlier DevStack environments in this Rocky dev cycle, so just 
wondering what the status is of the commands and if this is possibly an 
oversight.

Thanks!
Michael

Michael McAleer
Software Engineer 1, Core Technologies
Dell EMC | Enterprise Storage Division
Phone: +353 21 428 1729
michael.mcal...@dell.com
Ireland COE, Ovens, Co. Cork, Ireland


Re: [openstack-dev] Proposing Mark Goddard to ironic-core

2018-05-24 Thread Yolanda Robla Mota
+1 .. his reviews have always been helpful to me

On Thu, May 24, 2018 at 7:57 AM, Shivanand Tendulker 
wrote:

> +1 from me.
>
>
>
> On Sun, May 20, 2018 at 8:15 PM, Julia Kreger  > wrote:
>
>> Greetings everyone!
>>
>> I would like to propose Mark Goddard to ironic-core. I am aware he
>> recently joined kolla-core, but his contributions in ironic have been
>> insightful and valuable. The kind of value that comes from operative use.
>>
>> I also make this nomination knowing that our community landscape is
>> changing and that we must not silo our team responsibilities, or limit
>> the ability to move things forward to a small, highly focused team. I
>> trust Mark to use his judgement as he has time or need to do so. He
>> might not always have time, but I think at the end of the day, we're
>> all in that same boat.
>>
>> -Julia
>>
>>


-- 

Yolanda Robla Mota

Principal Software Engineer, RHCE

Red Hat



C/Avellana 213

Urb Portugal

yrobl...@redhat.com    M: +34605641639

