Re: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG

2018-04-18 Thread 赵超
On Fri, Apr 13, 2018 at 8:48 PM, Thomas Goirand  wrote:

> On 03/17/2018 09:34 AM, Emilien Macchi wrote:
>
> The other one that isn't Py3 ready *in stable* is trove-dashboard. I
> have sent backport patches, but they were not approved because of the
> stable gate having issues:
> https://review.openstack.org/#/c/554680/
> https://review.openstack.org/#/c/554681/
> https://review.openstack.org/#/c/554682/
> https://review.openstack.org/#/c/554683/
>
> The team had plans to make this pass (by temporarily fixing the gate)
> but so far, it hasn't happened.
>

Just FYI, these patches were merged today.

Thanks for reporting this and pushing them to the Queens branch.


-- 
To be free as in freedom.


[openstack-dev] [osc][swift] Setting storage policy for a container possible via the client?

2018-04-18 Thread Mark Kirkwood
Swift has had storage policies for a while now. These are enabled by 
setting the 'X-Storage-Policy' header on a container.


It looks to me like this is not possible using openstack-client (even in
the master branch): while there is a 'set' operation for containers, it
will *only* set 'Meta-*' type headers.
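
For reference, here's roughly how it can be done today outside of
openstack-client - a minimal sketch using python-swiftclient (the auth
values and the 'gold' policy name are placeholders for whatever your cloud
defines; note the policy has to be set when the container is created):

    # Hedged sketch: create a container with an explicit storage policy.
    # X-Storage-Policy is only honoured at container creation time.
    from swiftclient import client as swift_client

    conn = swift_client.Connection(
        preauthurl='http://swift.example.com/v1/AUTH_test',  # placeholder
        preauthtoken='secret-token',                         # placeholder
    )
    conn.put_container('my-container',
                       headers={'X-Storage-Policy': 'gold'})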


It seems to me that adding this would be highly desirable. Is it in the
pipeline? If not, I might see how much interest there is at my end in
adding it - as (famous last words) it looks pretty straightforward to do.


regards

Mark




Re: [openstack-dev] [stable][trove] keep trove-stable-maint members up-to-date

2018-04-18 Thread 赵超
Matt,

Thanks a lot!

On Thu, Apr 19, 2018 at 2:01 AM, Matt Riedemann  wrote:

> On 4/17/2018 8:49 PM, 赵超 wrote:
>
>> Thanks for approving the stable branch patches of trove and python-trove,
>> we also have some in the trove-dashboard.
>>
>
> I also went through the trove-dashboard ones, just need another
> stable-maint-core to approve those.
>
> https://review.openstack.org/#/q/project:openstack/trove-dashboard+status:open+NOT+branch:master
>
>
> --
>
> Thanks,
>
> Matt
>



-- 
To be free as in freedom.


Re: [openstack-dev] [openstack-infra] How to take over a project?

2018-04-18 Thread Sangho Shin
Ian, 

Thank you so much for your help.
I have requested Vikram to add me to the release team. 
He should be able to help me. :-)

Sangho


> On 19 Apr 2018, at 8:36 AM, Ian Wienand  wrote:
> 
> On 04/19/2018 01:19 AM, Ian Y. Choi wrote:
>> By the way, since the networking-onos-release group has no neutron
>> release team group, I think the infra team can help to include the
>> neutron release team, and the neutron release team can help to create
>> branches for the repo if there is no response from the current
>> networking-onos-release group members.
> 
> This seems sane and I've added neutron-release to
> networking-onos-release.
> 
> I'm hesitant to give advice on branching within a project like neutron
> as I'm sure there's stuff I'm not aware of; but members of the
> neutron-release team should be able to get you going.
> 
> Thanks,
> 
> -i




Re: [openstack-dev] [openstack-infra] How to take over a project?

2018-04-18 Thread Sangho Shin
Vikram,

According to https://review.openstack.org/#/admin/groups/1002,members,
you are a member of the networking-onos release team.
Can you please add me to the group so that I can create a new branch?


Thank you,

Sangho


> On 19 Apr 2018, at 12:19 AM, Ian Y. Choi  wrote:
> 
> Hello Sangho,
> 
> When I see the
> https://review.openstack.org/#/admin/projects/openstack/networking-onos,access
> page, it seems that networking-onos-release group members can create stable
> branches for the repository.
> 
> By the way, since the networking-onos-release group has no neutron release
> team group, I think the infra team can help to include the neutron release
> team, and the neutron release team can help to create branches
> for the repo if there is no response from the current networking-onos-release
> group members.
> 
> 
> Might this help you?
> 
> 
> With many thanks,
> 
> /Ian
> 
> Sangho Shin wrote on 4/18/2018 2:48 PM:
>> Hello, Ian
>> 
>> I am trying to add a new stable branch in networking-onos, following the
>> page you suggested.
>> 
>> 
>>    Create stable/* Branch
>>
>> 
>> For OpenStack projects this should be performed by the OpenStack Release 
>> Management Team at the Release Branch Point. If you are managing branches 
>> for your project you may have permission to do this yourself.
>> 
>>  * Go to https://review.openstack.org/ and sign in
>>  * Select ‘Admin’, ‘Projects’, then the project
>>  * Select ‘Branches’
>>  * Enter |stable/<series>| in the 'Branch Name' field, and |HEAD| as
>>    the 'Initial Revision', then press 'Create Branch'. Alternatively,
>>    you may run |git branch stable/<series> && git push gerrit
>>    stable/<series>|
>> 
>> 
>> However, after I log in, I cannot see 'Admin', and I also cannot create a
>> new branch. Do I need additional permissions for it?
>> BTW, I am a member of networking-onos-core team, as you know.
>> 
>> Thank you,
>> 
>> Sangho
>> 
>> 
>> 
>>> On 18 Apr 2018, at 9:00 AM, Sangho Shin wrote:
>>> 
>>> Ian and Gary,
>>> 
>>> Thank you so much for your answer.
>>> I will try what you suggested.
>>> 
>>> Thank you,
>>> 
>>> Sangho
>>> 
 On 17 Apr 2018, at 7:47 PM, Gary Kotton wrote:
 
 Hi,
 You either need one of the onos core team or the neutron release team to
 add you. FYI - https://review.openstack.org/#/admin/groups/1001,members
 
 Thanks
 Gary
 *From:* Sangho Shin
 *Reply-To:* OpenStack List
 *Date:* Tuesday, April 17, 2018 at 5:01 AM
 *To:* OpenStack List
 *Subject:* [openstack-dev] [openstack-infra] How to take over a project?
 Dear OpenStack Infra team,
 I would like to know how to take over an OpenStack project.
 I am a committer of the networking-onos project
 (https://github.com/openstack/networking-onos), and I would like to take
 over the project.
 The current maintainer (cc’d) has already agreed to this.
 Please let me know the process to take over (or change the maintainer of) 
 the project.
 BTW, it looks like even the current maintainer cannot create a new branch
 of the code. How can we get permission to create a new branch?
 Thank you,
 Sangho

Re: [openstack-dev] [openstack-infra] How to take over a project?

2018-04-18 Thread Ian Wienand

On 04/19/2018 01:19 AM, Ian Y. Choi wrote:

By the way, since the networking-onos-release group has no neutron
release team group, I think infra team can help to include neutron
release team and neutron release team can help to create branches
for the repo if there is no reponse from current
networking-onos-release group member.


This seems sane and I've added neutron-release to
networking-onos-release.

I'm hesitant to give advice on branching within a project like neutron
as I'm sure there's stuff I'm not aware of; but members of the
neutron-release team should be able to get you going.

Thanks,

-i



Re: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found

2018-04-18 Thread Julia Kreger
>
> This looks like some behavior that has been pulled out as part of pbr 4
> (version 3 is being used in the stable branch). Perhaps we want to
> update the pbr constraint there to use the newer version?
>

And it looks like doing that for stable/queens, at least from an
ironic-inspector point of view[1], fixes the issue for the branch. The
funny thing is, our ironic-inspector stable/pike -> stable/queens test
job now fails on stable/pike as well, with the same failure [2]. That
being said, we did observe while troubleshooting this issue last week
that the pep8 dist-info was present but the actual module contents were
not, which is why we worked around the issue by forcing the module to be
re-installed.

We also had this occur today on an ironic stable/queens backport
triggered grenade job when keystone was being upgraded [3].

If the answer is to update the upper constraint then, from my point of
view, I suspect we're going to want to consider doing it across the board.

Of course, the real question is what changed that is causing test
machines to think pep8 is present... :(

[1]: https://review.openstack.org/#/c/562384/
[2]: 
http://logs.openstack.org/84/562384/2/check/ironic-inspector-grenade-dsvm/59f0605/logs/grenade.sh.txt.gz#_2018-04-18_21_53_20_527
[3]: 
http://logs.openstack.org/14/562314/1/check/ironic-grenade-dsvm/2227c41/logs/grenade.sh.txt.gz#_2018-04-18_16_55_00_456



Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
> I have a feeling we're just going to go back and forth on this, as we
> have for weeks now, and not reach any conclusion that is satisfactory to
> everyone. And we'll delay, yet again, getting functionality into this
> release that serves 90% of use cases because we are obsessing over the
> 0.01% of use cases that may pop up later.

So I vote that, for the Rocky iteration of the granular spec, we add a
single `proximity={isolate|any}` qparam, required when any numbered
request groups are specified.  I believe this allows us to satisfy the
two NUMA use cases we care most about: "forced sharding" and "any fit".
And as you demonstrated, it leaves the way open for finer-grained and
more powerful semantics to be added in the future.
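
To make that concrete, a request under this proposal might be built like
the sketch below (the 'proximity' name and values are this thread's
proposal, not a merged placement API; requests is just a stand-in for a
keystoneauth session):

    # Hypothetical sketch: two granular groups of one PCPU each that must
    # be satisfied by different providers (e.g. different NUMA cells).
    import requests

    resp = requests.get(
        'http://placement.example.com/allocation_candidates',
        params={
            'resources1': 'PCPU:1',
            'resources2': 'PCPU:1',
            'proximity': 'isolate',  # or 'any' to allow a single provider
        })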

-efried



Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Jay Pipes

On 04/18/2018 04:52 PM, Eric Fried wrote:

I can't tell if you're being facetious, but this seems sane, albeit
complex.  It's also extensible as we come up with new and wacky affinity
semantics we want to support.


I was not being facetious.


I can't say I'm sold on requiring `proximity` qparams that cover every
granular group - that seems like a pretty onerous burden to put on the
user right out of the gate.


I did that because Matt said he wanted no default/implicit behaviour -- 
everything should be explicit.


That said, the idea of not having a default
is quite appealing.  Perhaps as a first pass we can require a single
?proximity={isolate|any} and build on it to support group numbers (etc.)
in the future.


Here's my problem.

I have a feeling we're just going to go back and forth on this, as we 
have for weeks now, and not reach any conclusion that is satisfactory to 
everyone. And we'll delay, yet again, getting functionality into this 
release that serves 90% of use cases because we are obsessing over the 
0.01% of use cases that may pop up later.


Best,
-jay


One other thing inline below, not related to the immediate subject.

On 04/18/2018 12:40 PM, Jay Pipes wrote:

On 04/18/2018 11:58 AM, Matt Riedemann wrote:

On 4/18/2018 9:06 AM, Jay Pipes wrote:

"By default, should resources/traits submitted in different numbered
request groups be supplied by separate resource providers?"


Without knowing all of the hairy use cases, I'm trying to channel my
inner sdague and some of the similar types of discussions we've had to
changes in the compute API, and a lot of the time we've agreed that we
shouldn't assume a default in certain cases.

So for this case, if I'm requesting numbered request groups, why
doesn't the API just require that I pass a query parameter telling it
how I'd like those requests to be handled, either via affinity or
anti-affinity

So, you're thinking maybe something like this?

1) Get me two dedicated CPUs. One of those dedicated CPUs must have AVX2
capabilities. They must be on different child providers (different NUMA
cells that are providing those dedicated CPUs).

GET /allocation_candidates?

  resources1=PCPU:1&required1=HW_CPU_X86_AVX2
 &resources2=PCPU:1
 &proximity=isolate:1,2

2) Get me four dedicated CPUs. Two of those dedicated CPUs must have
AVX2 capabilities. Two of the dedicated CPUs must have the SSE 4.2
capability. They may come from the same provider (NUMA cell) or
different providers.

GET /allocation_candidates?

  resources1=PCPU:2&required1=HW_CPU_X86_AVX2
 &resources2=PCPU:2&required2=HW_CPU_X86_SSE42
 &proximity=any:1,2

3) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by
separate physical function providers which have different traits marking
separate physical networks. The dedicated CPUs must come from the same
provider tree in which the physical function providers reside.

GET /allocation_candidates?

  resources1=PCPU:2
 &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
 &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
 &proximity=isolate:2,3
 &proximity=same_tree:1,2,3

4) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by
separate physical function providers which have different traits marking
separate physical networks. The dedicated CPUs must come from the same
provider *subtree* in which the second group of VF resources are sourced.

GET /allocation_candidates?

  resources1=PCPU:2
 &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
 &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
 &proximity=isolate:2,3
 &proximity=same_subtree:1,3


The 'same_subtree' concept requires a way to identify how far up the
common ancestor can be.  Otherwise, *everything* is in the same subtree.
  You could arbitrarily say "one step down from the root", but that's not
very flexible.  Allowing the user to specify a *number* of steps down
from the root is getting closer, but it requires the user to have an
understanding of the provider tree's exact structure, which is not ideal.

The idea I've been toying with here is "common ancestor by trait".  For
example, you would tag your NUMA node providers with trait NUMA_ROOT,
and then your request would include:

   ...
   &proximity=common_ancestor_by_trait:NUMA_ROOT:1,3



5) Get me 4 SR-IOV VFs. 2 VFs should be sourced from a provider that is
decorated with the CUSTOM_PHYSNET_A trait. 2 VFs should be sourced from
a provider that is decorated with the CUSTOM_PHYSNET_B trait. For HA
purposes, none of the VFs should be sourced from the same provider.
However, the VFs for each physical network should be within the same
subtree (NUMA cell) as each other.

GET /allocation_candidates?

  resources1=SRIOV_NET_VF:1&required1=CUSTOM_PHYSNET_A
 &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
 &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
 &resources4=SRIOV_NET_VF:1&required4=CUSTOM_PHYSNET_B
 &proximity=isolate:1,2,3,4
 &proximity=same_subtree:1,2
 &proximity=same_subtree:3,4

We can go even deeper if you'd like, since NFV means "never-ending
feature velocity". Just let me know.

-jay


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
Sorry, addressing gaffe, bringing this back on-list...

On 04/18/2018 04:36 PM, Ed Leafe wrote:
> On Apr 18, 2018, at 4:11 PM, Eric Fried  wrote:
>>> That makes a lot of sense. Since we are already suffixing the query param 
>>> “resources” to indicate granular, why not add a clarifying term to that 
>>> suffix? E.g., “resources1=“ -> “resources1d” (for ‘different’). The exact 
>>> string we use can be bike shedded, but requiring it be specified sounds 
>>> pretty sane to me.
>>  I'm not understanding what you mean here.  The issue at hand is how
>> numbered groups interact with *each other*.  If I said
>> resources1s=...&resources2d=..., what am I saying about whether the
>> resources in group 1 can or can't land in the same RP as those of group 2?
> OK, sorry. What I meant by the ‘d’ was that that group’s resources must be 
> from a different provider than any other group’s resources (anti-affinity). 
> So in your example, you don’t care if group1 is from the same provider, but 
> you do with group2, so that’s kind of a contradictory set-up (unless you had 
> other groups).
>
> Instead, if the example were changed to
> resources1s=...&resources2d=...&resources3s=…, then groups 1 and 3 could be
> allocated from the same provider.
>
> -- Ed Leafe

This is a cool idea.  It doesn't allow the same level of granularity as
being able to list explicit group numbers to be [anti-]affinitized with
specific other groups - but I'm not sure we need that.  I would have to
think through the use cases with this in mind.

-efried



Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
> Cool. So let's not use a GET for this and instead change it to a POST
> with a request body that can more cleanly describe what the user is
> requesting, which is something we talked about a long time ago.

I kinda doubt we could agree on a format for this in the Rocky
timeframe.  But for the sake of curiosity, I'd like to see some strawman
proposals for what that request body would look like.  Here's a couple
off the top:

{
  "anti-affinity": [
      {
          "resources": { $RESOURCE_CLASS: amount, ... },
          "required": [ $TRAIT, ... ],
          "forbidden": [ $TRAIT, ... ],
      },
      ...
  ],
  "affinity": [
      ...
  ],
  "any fit": [
      ...
  ],
}

Or maybe:

{
  $ARBITRARY_USER_SPECIFIED_KEY_DESCRIBING_THE_GROUP: {
      "resources": { $RESOURCE_CLASS: amount, ... },
      "required": [ $TRAIT, ... ],
      "forbidden": [ $TRAIT, ... ],
  },
  ...
  "affinity_spec": {
      "isolate": [ $ARBITRARY_KEY, ... ],
      "any": [ $ARBITRARY_KEY, ... ],
      "common_subtree_by_trait": {
          "groups": [ $ARBITRARY_KEY, ... ],
          "traits": [ $TRAIT, ... ],
      },
  }
}

(I think we also now need to fold multiple `member_of` in there somehow.
 And `limit` - does that stay in the querystring?  Etc.)

-efried



Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
Chris-

Going to accumulate a couple of your emails and answer them.  I could
have answered them separately (anti-affinity).  But in this case I felt
it appropriate to provide responses in a single note (best fit).

> I'm a bit conflicted.  On the one hand...

> On the other hand,

Right; we're in agreement that we need to handle both.

> I'm half tempted to side with mriedem and say that there is no default
> and it must be explicit, but I'm concerned that this would make the
> requests a lot larger if you have to specify it for every resource. 
and
> The request might get unwieldy if we have to specify affinity/anti-
> affinity for each resource.  Maybe you could specify the default for
> the request and then optionally override it for each resource?

Yes, good call.  I'm favoring this as a first pass.  See my other response.

> In either viewpoint, is there a way to represent "I want two resource
> groups, with resource X in each group coming from different resource
> providers (anti-affinity) and resource Y from the same resource provider
> (affinity)?

As proposed, yes.  Though if we go with the above (one flag to specify
request-wide behavior) then there wouldn't be that ability beyond
putting things in the un-numbered vs. numbered groups.  So I guess my
question is: do we have a use case *right now* that requires supporting
"isolate for some, unrestricted for others"?

> I'm not current on the placement implementation details, but would
> this level of flexibility cause complexity problems in the code?

Oh, implementing this is complex af.  Here's what it takes *just* to
satisfy the "any fit" version:

https://review.openstack.org/#/c/517757/10/nova/api/openstack/placement/objects/resource_provider.py@3599

I've made some progress implementing "proximity=isolate:X,Y,..." in my
sandbox, and that's even hairier.  Doing "proximity=isolate"
(request-wide policy) would be a little easier.

-efried



Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
I can't tell if you're being facetious, but this seems sane, albeit
complex.  It's also extensible as we come up with new and wacky affinity
semantics we want to support.

I can't say I'm sold on requiring `proximity` qparams that cover every
granular group - that seems like a pretty onerous burden to put on the
user right out of the gate.  That said, the idea of not having a default
is quite appealing.  Perhaps as a first pass we can require a single
?proximity={isolate|any} and build on it to support group numbers (etc.)
in the future.

One other thing inline below, not related to the immediate subject.

On 04/18/2018 12:40 PM, Jay Pipes wrote:
> On 04/18/2018 11:58 AM, Matt Riedemann wrote:
>> On 4/18/2018 9:06 AM, Jay Pipes wrote:
>>> "By default, should resources/traits submitted in different numbered
>>> request groups be supplied by separate resource providers?"
>>
>> Without knowing all of the hairy use cases, I'm trying to channel my
>> inner sdague and some of the similar types of discussions we've had to
>> changes in the compute API, and a lot of the time we've agreed that we
>> shouldn't assume a default in certain cases.
>>
>> So for this case, if I'm requesting numbered request groups, why
>> doesn't the API just require that I pass a query parameter telling it
>> how I'd like those requests to be handled, either via affinity or
>> anti-affinity
> So, you're thinking maybe something like this?
> 
> 1) Get me two dedicated CPUs. One of those dedicated CPUs must have AVX2
> capabilities. They must be on different child providers (different NUMA
> cells that are providing those dedicated CPUs).
> 
> GET /allocation_candidates?
> 
>  resources1=PCPU:1&required1=HW_CPU_X86_AVX2
> &resources2=PCPU:1
> &proximity=isolate:1,2
> 
> 2) Get me four dedicated CPUs. Two of those dedicated CPUs must have
> AVX2 capabilities. Two of the dedicated CPUs must have the SSE 4.2
> capability. They may come from the same provider (NUMA cell) or
> different providers.
> 
> GET /allocation_candidates?
> 
>  resources1=PCPU:2&required1=HW_CPU_X86_AVX2
> &resources2=PCPU:2&required2=HW_CPU_X86_SSE42
> &proximity=any:1,2
> 
> 3) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by
> separate physical function providers which have different traits marking
> separate physical networks. The dedicated CPUs must come from the same
> provider tree in which the physical function providers reside.
> 
> GET /allocation_candidates?
> 
>  resources1=PCPU:2
> &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
> &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
> &proximity=isolate:2,3
> &proximity=same_tree:1,2,3
> 
> 4) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by
> separate physical function providers which have different traits marking
> separate physical networks. The dedicated CPUs must come from the same
> provider *subtree* in which the second group of VF resources are sourced.
> 
> GET /allocation_candidates?
> 
>  resources1=PCPU:2
> &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
> &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
> &proximity=isolate:2,3
> &proximity=same_subtree:1,3

The 'same_subtree' concept requires a way to identify how far up the
common ancestor can be.  Otherwise, *everything* is in the same subtree.
 You could arbitrarily say "one step down from the root", but that's not
very flexible.  Allowing the user to specify a *number* of steps down
from the root is getting closer, but it requires the user to have an
understanding of the provider tree's exact structure, which is not ideal.

The idea I've been toying with here is "common ancestor by trait".  For
example, you would tag your NUMA node providers with trait NUMA_ROOT,
and then your request would include:

  ...
   &proximity=common_ancestor_by_trait:NUMA_ROOT:1,3

> 
> 5) Get me 4 SR-IOV VFs. 2 VFs should be sourced from a provider that is
> decorated with the CUSTOM_PHYSNET_A trait. 2 VFs should be sourced from
> a provider that is decorated with the CUSTOM_PHYSNET_B trait. For HA
> purposes, none of the VFs should be sourced from the same provider.
> However, the VFs for each physical network should be within the same
> subtree (NUMA cell) as each other.
> 
> GET /allocation_candidates?
> 
>  resources1=SRIOV_NET_VF:1&required1=CUSTOM_PHYSNET_A
> &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
> &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
> &resources4=SRIOV_NET_VF:1&required4=CUSTOM_PHYSNET_B
> &proximity=isolate:1,2,3,4
> &proximity=same_subtree:1,2
> &proximity=same_subtree:3,4
> 
> We can go even deeper if you'd like, since NFV means "never-ending
> feature velocity". Just let me know.
> 
> -jay
> 


Re: [openstack-dev] [nova] Default scheduler filters survey

2018-04-18 Thread Simon Leinen
Artom Lifshitz writes:
> To that end, we'd like to know what filters operators are enabling in
> their deployment. If you can, please reply to this email with your
> [filter_scheduler]/enabled_filters (or
> [DEFAULT]/scheduler_default_filters if you're using an older version)
> option from nova.conf. Any other comments are welcome as well :)

We have the following enabled on our semi-public (academic community)
cloud, which runs on Newton:

AggregateInstanceExtraSpecsFilter
AvailabilityZoneFilter
ComputeCapabilitiesFilter
ComputeFilter
ImagePropertiesFilter
PciPassthroughFilter
RamFilter
RetryFilter
ServerGroupAffinityFilter
ServerGroupAntiAffinityFilter

(sorted alphabetically). Recently we've also been trying

AggregateImagePropertiesIsolation

...but it looks like we'll replace it with our own because it's a bit
awkward to use for our purpose (scheduling Windows instances to licensed
compute nodes).
-- 
Simon.



Re: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found

2018-04-18 Thread Doug Hellmann
Excerpts from Jim Rollenhagen's message of 2018-04-18 13:44:08 -0400:
> Hi all,
> 
> We have a number of stable branch jobs failing[0] with an error about pep8
> not being importable[1], when it's clearly installed[2]. We first saw this
> when installing networking-generic-switch on queens in our multinode
> grenade job. We hacked a fix there[3], as we couldn't figure it out and
> thought it was a fluke. Now it's showing up elsewhere.
> 
> I suspected a new pycodestyle was the culprit (maybe it kills off the pep8
> package somehow?) but pinning pycodestyle back a version didn't seem to
> help.
> 
> Any ideas what might be going on here? I'm completely lost.
> 
> P.S. if anyone has the side question of why pep8 is being imported at
> install time, it seems that pbr iterates over any entry points under
> 'distutils.commands' for any installed package. flake8 has one of these
> which must import pep8 to be resolved. I'm not sure *why* pbr needs to do
> this, but I'll assume it's necessary.
> 
> [0] https://review.openstack.org/#/c/557441/
> [1]
> http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_48_01_508
> [2]
> http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_47_40_822
> [3] https://review.openstack.org/#/c/561358/
> 
> // jim

This looks like some behavior that has been pulled out as part of pbr 4
(version 3 is being used in the stable branch). Perhaps we want to
update the pbr constraint there to use the newer version?

Doug



Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat

2018-04-18 Thread Dan Smith
> Having briefly read the cloud-init snippet which was linked earlier in
> this thread, the requirement seems to be that the guest exposes the
> device as /dev/srX or /dev/cdX. So I guess in order to make this work:
>
> * You need to tell z/VM to expose the virtual disk as an optical disk
> * The z/VM kernel needs to call optical disks /dev/srX or /dev/cdX

According to the docs, it doesn't need to be. You can indicate the config
drive via its filesystem label, which makes sense given we support vfat
for it as well.

http://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html#version-2
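
As a rough illustration of what label-based discovery looks like from
inside a guest (the 'config-2' label is the one the linked docs describe;
util-linux's blkid is assumed to be available):

    # Sketch: locate the config drive by filesystem label, not device name.
    import subprocess

    def find_config_drive():
        try:
            return subprocess.check_output(
                ['blkid', '-L', 'config-2']).decode().strip()
        except subprocess.CalledProcessError:
            return None  # no filesystem carries the config drive label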

--Dan



Re: [openstack-dev] [nova] Concern about trusted certificates API change

2018-04-18 Thread Dan Smith
> Maybe it wasn't clear but I'm not advocating that we block the change
> until volume-backed instances are supported with trusted certs. I'm
> suggesting we add a policy rule which allows deployers to at least
> disable it via policy if it's not supported for their cloud.

That's fine with me, and provides an out for another issue I pointed out
on the code review. Basically, the operator has no way to disable this
feature. If they haven't set this up properly and have no desire to, a
user reading the API spec and passing trusted certs will not be able to
boot an instance and not really understand why.

> I agree. I'm the one that noticed the issue and pointed out in the
> code review that we should explicitly fail the request if we can't
> honor it.

I agree for the moment for sure, but it would obviously be nice not to
open another gap we're not going to close. There's no reason this can't
be supported for volume-backed instances, it just requires some help
from cinder.

I would think that it'd be nice if we could declare the "can't do this
for reasons" response as a valid one regardless of the cause so we don't
need another microversion for the future where volume-backed instances
can do this.

> Again, I'm not advocating that we block until boot from volume is
> supported. However, we have a lot of technical debt for "good
> functionality" added over the years that failed to consider
> volume-backed instances, like rebuild, rescue, backup, etc and it's
> painful to deal with that after the fact, as can be seen from the
> various specs proposed for adding that support to those APIs.

Totes agree.

--Dan



Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Matt Riedemann

On 4/18/2018 12:40 PM, Jay Pipes wrote:
We can go even deeper if you'd like, since NFV means "never-ending 
feature velocity". Just let me know.


Cool. So let's not use a GET for this and instead change it to a POST 
with a request body that can more cleanly describe what the user is 
requesting, which is something we talked about a long time ago.


--

Thanks,

Matt



Re: [openstack-dev] [stable][trove] keep trove-stable-maint members up-to-date

2018-04-18 Thread Matt Riedemann

On 4/17/2018 8:49 PM, 赵超 wrote:
Thanks for approving the stable branch patches of trove and 
python-trove, we also have some in the trove-dashboard.


I also went through the trove-dashboard ones, just need another 
stable-maint-core to approve those.


https://review.openstack.org/#/q/project:openstack/trove-dashboard+status:open+NOT+branch:master

--

Thanks,

Matt



[openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found

2018-04-18 Thread Jim Rollenhagen
Hi all,

We have a number of stable branch jobs failing[0] with an error about pep8
not being importable[1], when it's clearly installed[2]. We first saw this
when installing networking-generic-switch on queens in our multinode
grenade job. We hacked a fix there[3], as we couldn't figure it out and
thought it was a fluke. Now it's showing up elsewhere.

I suspected a new pycodestyle was the culprit (maybe it kills off the pep8
package somehow?) but pinning pycodestyle back a version didn't seem to
help.

Any ideas what might be going on here? I'm completely lost.

P.S. if anyone has the side question of why pep8 is being imported at
install time, it seems that pbr iterates over any entry points under
'distutils.commands' for any installed package. flake8 has one of these
which must import pep8 to be resolved. I'm not sure *why* pbr needs to do
this, but I'll assume it's necessary.
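
A rough illustration of that behavior (assumed mechanics, not pbr's actual
code) - merely resolving the 'distutils.commands' entry points is enough
to trigger the import:

    # Sketch: loading each 'distutils.commands' entry point imports the
    # module defining it, so flake8's entry point does 'import pep8' even
    # though nothing explicitly asked for it.
    import pkg_resources

    for ep in pkg_resources.iter_entry_points('distutils.commands'):
        try:
            ep.load()  # this is where 'import pep8' can blow up
        except ImportError as exc:
            print('%s failed to load: %s' % (ep, exc))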

[0] https://review.openstack.org/#/c/557441/
[1]
http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_48_01_508
[2]
http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_47_40_822
[3] https://review.openstack.org/#/c/561358/

// jim


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Jay Pipes

On 04/18/2018 11:58 AM, Matt Riedemann wrote:

On 4/18/2018 9:06 AM, Jay Pipes wrote:
"By default, should resources/traits submitted in different numbered 
request groups be supplied by separate resource providers?"


Without knowing all of the hairy use cases, I'm trying to channel my 
inner sdague and some of the similar types of discussions we've had to 
changes in the compute API, and a lot of the time we've agreed that we 
shouldn't assume a default in certain cases.


So for this case, if I'm requesting numbered request groups, why doesn't 
the API just require that I pass a query parameter telling it how I'd 
like those requests to be handled, either via affinity or anti-affinity

So, you're thinking maybe something like this?

1) Get me two dedicated CPUs. One of those dedicated CPUs must have AVX2 
capabilities. They must be on different child providers (different NUMA 
cells that are providing those dedicated CPUs).


GET /allocation_candidates?

 resources1=PCPU:1&required1=HW_CPU_X86_AVX2
&resources2=PCPU:1
&proximity=isolate:1,2

2) Get me four dedicated CPUs. Two of those dedicated CPUs must have 
AVX2 capabilities. Two of the dedicated CPUs must have the SSE 4.2 
capability. They may come from the same provider (NUMA cell) or 
different providers.


GET /allocation_candidates?

 resources1=PCPU:2&required1=HW_CPU_X86_AVX2
&resources2=PCPU:2&required2=HW_CPU_X86_SSE42
&proximity=any:1,2

3) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by 
separate physical function providers which have different traits marking 
separate physical networks. The dedicated CPUs must come from the same 
provider tree in which the physical function providers reside.


GET /allocation_candidates?

 resources1=PCPU:2
&resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
&resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
&proximity=isolate:2,3
&proximity=same_tree:1,2,3

4) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by 
separate physical function providers which have different traits marking 
separate physical networks. The dedicated CPUs must come from the same 
provider *subtree* in which the second group of VF resources are sourced.


GET /allocation_candidates?

 resources1=PCPU:2
&resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
&resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
&proximity=isolate:2,3
&proximity=same_subtree:1,3

5) Get me 4 SR-IOV VFs. 2 VFs should be sourced from a provider that is 
decorated with the CUSTOM_PHYSNET_A trait. 2 VFs should be sourced from 
a provider that is decorated with the CUSTOM_PHYSNET_B trait. For HA 
purposes, none of the VFs should be sourced from the same provider. 
However, the VFs for each physical network should be within the same 
subtree (NUMA cell) as each other.


GET /allocation_candidates?

 resources1=SRIOV_NET_VF:1&required1=CUSTOM_PHYSNET_A
&resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
&resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
&resources4=SRIOV_NET_VF:1&required4=CUSTOM_PHYSNET_B
&proximity=isolate:1,2,3,4
&proximity=same_subtree:1,2
&proximity=same_subtree:3,4

We can go even deeper if you'd like, since NFV means "never-ending 
feature velocity". Just let me know.


-jay



Re: [openstack-dev] [nova] Default scheduler filters survey

2018-04-18 Thread Jonathan D. Proulx
On Wed, Apr 18, 2018 at 05:20:13PM +, Tim Bell wrote:
:I'd suggest asking on the openstack-operators list since there is only a 
subset of operators who follow openstack-dev.

I'd second that; while I'm (obviously) subscribed to both, I do pay more
attention to operators, and almost missed this ask.

but here's mine:

scheduler_default_filters=ComputeFilter,AggregateInstanceExtraSpecsFilter,AggregateCoreFilter,AggregateRamFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,ImagePropertiesFilter,PciPassthroughFilter

:Tim
:
:-Original Message-
:From: Chris Friesen 
:Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

:Date: Wednesday, 18 April 2018 at 18:34
:To: "openstack-dev@lists.openstack.org" 
:Subject: Re: [openstack-dev] [nova] Default scheduler filters survey
:
:On 04/18/2018 09:17 AM, Artom Lifshitz wrote:
:
:> To that end, we'd like to know what filters operators are enabling in
:> their deployment. If you can, please reply to this email with your
:> [filter_scheduler]/enabled_filters (or
:> [DEFAULT]/scheduler_default_filters if you're using an older version)
:> option from nova.conf. Any other comments are welcome as well :)
:
:RetryFilter
:ComputeFilter
:AvailabilityZoneFilter
:AggregateInstanceExtraSpecsFilter
:ComputeCapabilitiesFilter
:ImagePropertiesFilter
:NUMATopologyFilter
:ServerGroupAffinityFilter
:ServerGroupAntiAffinityFilter
:PciPassthroughFilter



Re: [openstack-dev] [nova] Default scheduler filters survey

2018-04-18 Thread Tim Bell
I'd suggest asking on the openstack-operators list since there is only a subset 
of operators who follow openstack-dev.

Tim

-Original Message-
From: Chris Friesen 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 18 April 2018 at 18:34
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [nova] Default scheduler filters survey

On 04/18/2018 09:17 AM, Artom Lifshitz wrote:

> To that end, we'd like to know what filters operators are enabling in
> their deployment. If you can, please reply to this email with your
> [filter_scheduler]/enabled_filters (or
> [DEFAULT]/scheduler_default_filters if you're using an older version)
> option from nova.conf. Any other comments are welcome as well :)

RetryFilter
ComputeFilter
AvailabilityZoneFilter
AggregateInstanceExtraSpecsFilter
ComputeCapabilitiesFilter
ImagePropertiesFilter
NUMATopologyFilter
ServerGroupAffinityFilter
ServerGroupAntiAffinityFilter
PciPassthroughFilter




Re: [openstack-dev] [nova] Concern about trusted certificates API change

2018-04-18 Thread Jay Pipes

On 04/18/2018 01:14 PM, Matt Riedemann wrote:

On 4/18/2018 12:09 PM, Chris Friesen wrote:
If this happens, is it clear to the end-user that the reason the boot 
failed is that the cloud doesn't support trusted cert IDs for 
boot-from-vol?  If so, then I think that's totally fine.


If you're creating an image-backed server and requesting specific 
trusted certs, you'll get by the API but could land on a compute host 
that doesn't support image validation, like any non-libvirt driver, and 
at that point the trusted certs request is ignored.


We could fix that the same way I've proposed we fix it for boot from 
volume with multiattach volumes in that the compute node resource 
provider would have a trait on it for the capability, and we'd add a 
placement request filter that detects, from the RequestSpec, that you're 
trying to do this specific thing that requires a compute that supports 
that capability, otherwise you get NoValidHost.


+1

Still looking for reviews on https://review.openstack.org/#/c/546713/.

Thanks,
-jay



Re: [openstack-dev] [nova] Concern about trusted certificates API change

2018-04-18 Thread Matt Riedemann

On 4/18/2018 12:09 PM, Chris Friesen wrote:
If this happens, is it clear to the end-user that the reason the boot 
failed is that the cloud doesn't support trusted cert IDs for 
boot-from-vol?  If so, then I think that's totally fine.


If you're creating an image-backed server and requesting specific 
trusted certs, you'll get by the API but could land on a compute host 
that doesn't support image validation, like any non-libvirt driver, and 
at that point the trusted certs request is ignored.


We could fix that the same way I've proposed we fix it for boot from 
volume with multiattach volumes in that the compute node resource 
provider would have a trait on it for the capability, and we'd add a 
placement request filter that detects, from the RequestSpec, that you're 
trying to do this specific thing that requires a compute that supports 
that capability, otherwise you get NoValidHost.
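
Something along these lines, as a purely hypothetical sketch (all names
assumed; modeled on the nova.scheduler.request_filter pattern and the
existing 'trait:<NAME>=required' flavor extra spec syntax):

    # Sketch: if the RequestSpec asks for trusted certs, require a
    # capability trait so only supporting computes are candidates.
    def require_image_cert_validation(ctxt, request_spec):
        if getattr(request_spec, 'trusted_certs', None):
            # COMPUTE_TRUSTED_CERTS is an assumed trait name here.
            request_spec.flavor.extra_specs[
                'trait:COMPUTE_TRUSTED_CERTS'] = 'required'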


--

Thanks,

Matt



Re: [openstack-dev] [nova] Concern about trusted certificates API change

2018-04-18 Thread Matt Riedemann

On 4/18/2018 11:57 AM, Jay Pipes wrote:
There is a compute REST API change proposed [1] which will allow users 
to pass trusted certificate IDs to be used with validation of images 
when creating or rebuilding a server. The trusted cert IDs are based 
on certificates stored in some key manager, e.g. Barbican.


The full nova spec is here [2].

The main concern I have is that trusted certs will not be supported 
for volume-backed instances, and some clouds only support 
volume-backed instances.


Yes. And some clouds only support VMWare vCenter virt driver. And some 
only support Hyper-V. I don't believe we should delay adding good 
functionality to (large percentage of) clouds because it doesn't yet 
work with one virt driver or one piece of (badly-designed) functionality.


Maybe it wasn't clear but I'm not advocating that we block the change 
until volume-backed instances are supported with trusted certs. I'm 
suggesting we add a policy rule which allows deployers to at least 
disable it via policy if it's not supported for their cloud.



> The way the patch is written is that if the user attempts to
> boot from volume with trusted certs, it will fail.


And... I think that's perfectly fine.


I agree. I'm the one that noticed the issue and pointed out in the code 
review that we should explicitly fail the request if we can't honor it.




In thinking about a semi-discoverable/configurable solution, I'm 
thinking we should add a policy rule around trusted certs to indicate 
if they can be used or not. Beyond the boot from volume issue, the 
only virt driver that supports trusted cert image validation is the 
libvirt driver, so any cloud that's not using the libvirt driver 
simply cannot support this feature, regardless of boot from volume. We 
have added similar policy rules in the past for backend-dependent 
features like volume extend and volume multi-attach, so I don't think 
this is a new issue.


Alternatively we can block the change in nova until it supports boot 
from volume, but that would mean needing to add trusted cert image 
validation support into cinder along with API changes, effectively 
killing the chance of this getting done in nova in Rocky, and this 
blueprint has been around since at least Ocata so it would be good to 
make progress if possible.


As mentioned above, I don't want to derail progress until (if ever?) 
trusted certs achieves this magical 
works-for-every-driver-and-functionality state. It's not realistic to 
expect this to be done, IMHO, and just keeps good functionality out of 
the hands of many cloud users.


Again, I'm not advocating that we block until boot from volume is 
supported. However, we have a lot of technical debt for "good 
functionality" added over the years that failed to consider 
volume-backed instances, like rebuild, rescue, backup, etc and it's 
painful to deal with that after the fact, as can be seen from the 
various specs proposed for adding that support to those APIs.


--

Thanks,

Matt



Re: [openstack-dev] [nova] Concern about trusted certificates API change

2018-04-18 Thread Chris Friesen

On 04/18/2018 10:57 AM, Jay Pipes wrote:

On 04/18/2018 12:41 PM, Matt Riedemann wrote:

There is a compute REST API change proposed [1] which will allow users to pass
trusted certificate IDs to be used with validation of images when creating or
rebuilding a server. The trusted cert IDs are based on certificates stored in
some key manager, e.g. Barbican.

The full nova spec is here [2].

The main concern I have is that trusted certs will not be supported for
volume-backed instances, and some clouds only support volume-backed instances.


Yes. And some clouds only support VMWare vCenter virt driver. And some only
support Hyper-V. I don't believe we should delay adding good functionality to
(large percentage of) clouds because it doesn't yet work with one virt driver or
one piece of (badly-designed) functionality.

> The way the patch is written is that if the user attempts to
> boot from volume with trusted certs, it will fail.


And... I think that's perfectly fine.


If this happens, is it clear to the end-user that the reason the boot failed is 
that the cloud doesn't support trusted cert IDs for boot-from-vol?  If so, then 
I think that's totally fine.


If the error message is unclear, then maybe we should just improve it.

Chris



Re: [openstack-dev] [nova] Concern about trusted certificates API change

2018-04-18 Thread Jay Pipes

On 04/18/2018 12:41 PM, Matt Riedemann wrote:
There is a compute REST API change proposed [1] which will allow users 
to pass trusted certificate IDs to be used with validation of images 
when creating or rebuilding a server. The trusted cert IDs are based on 
certificates stored in some key manager, e.g. Barbican.


The full nova spec is here [2].

The main concern I have is that trusted certs will not be supported for 
volume-backed instances, and some clouds only support volume-backed 
instances.


Yes. And some clouds only support VMWare vCenter virt driver. And some 
only support Hyper-V. I don't believe we should delay adding good 
functionality to (large percentage of) clouds because it doesn't yet 
work with one virt driver or one piece of (badly-designed) functionality.


> The way the patch is written is that if the user attempts to
> boot from volume with trusted certs, it will fail.


And... I think that's perfectly fine.

In thinking about a semi-discoverable/configurable solution, I'm 
thinking we should add a policy rule around trusted certs to indicate if 
they can be used or not. Beyond the boot from volume issue, the only 
virt driver that supports trusted cert image validation is the libvirt 
driver, so any cloud that's not using the libvirt driver simply cannot 
support this feature, regardless of boot from volume. We have added 
similar policy rules in the past for backend-dependent features like 
volume extend and volume multi-attach, so I don't think this is a new 
issue.


Alternatively we can block the change in nova until it supports boot 
from volume, but that would mean needing to add trusted cert image 
validation support into cinder along with API changes, effectively 
killing the chance of this getting done in nova in Rocky, and this 
blueprint has been around since at least Ocata so it would be good to 
make progress if possible.


As mentioned above, I don't want to derail progress until (if ever?) 
trusted certs achieves this magical 
works-for-every-driver-and-functionality state. It's not realistic to 
expect this to be done, IMHO, and just keeps good functionality out of 
the hands of many cloud users.


Just my 2 cents.
-jay


[1] https://review.openstack.org/#/c/486204/
[2] 
https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/nova-validate-certificates.html 







[openstack-dev] [tripleo] CI & Tempest squad planning summary: Sprint 12

2018-04-18 Thread Matt Young
Greetings,

The TripleO CI & Tempest squads have begun work on Sprint 12.  This is a 3
week sprint.

The Ruck & Rover for this sprint are quiquell and panda.

## CI Squad

Goals:

"As a developer, I want reproduce a multinode CI job on a bare metal host
using libvirt"
"Enable the same workflows used in upstream CI / reproducer using libvirt
instead of OVB"

Epic:  https://trello.com/c/JEGLSVh6/323-reproduce-ci-jobs-with-libvirt
Tasks: https://tinyurl.com/yd93nz8p

## Tempest Squad

Goals:

"Run tempest on undercloud by using containerized and packaged tempest as
well as against Heat, Mistral, Ironic, Tempest and python-tempestconf
upstream"
"Finish work items carried from sprint 11 or other side work going on."

Epic:  https://trello.com/c/ifIYQsxs/680-sprint-12-undercloud-tempest
Tasks: https://tinyurl.com/y8k6yvbm

For any questions please find us in #tripleo

Thanks,

Matt


[openstack-dev] [nova] Concern about trusted certificates API change

2018-04-18 Thread Matt Riedemann
There is a compute REST API change proposed [1] which will allow users 
to pass trusted certificate IDs to be used with validation of images 
when creating or rebuilding a server. The trusted cert IDs are based on 
certificates stored in some key manager, e.g. Barbican.


The full nova spec is here [2].

The main concern I have is that trusted certs will not be supported for 
volume-backed instances, and some clouds only support volume-backed 
instances. The way the patch is written is that if the user attempts to 
boot from volume with trusted certs, it will fail.


In thinking about a semi-discoverable/configurable solution, I'm 
thinking we should add a policy rule around trusted certs to indicate if 
they can be used or not. Beyond the boot from volume issue, the only 
virt driver that supports trusted cert image validation is the libvirt 
driver, so any cloud that's not using the libvirt driver simply cannot 
support this feature, regardless of boot from volume. We have added 
similar policy rules in the past for backend-dependent features like 
volume extend and volume multi-attach, so I don't think this is a new issue.
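
For illustration, such a rule could be registered the way nova registers
similar knobs with oslo.policy - a hedged sketch with an assumed rule
name; deployers whose backends can't honor trusted certs could set the
check string to '!' to reject those requests up front:

    # Sketch: a deployer-facing policy switch for trusted certs.
    from oslo_policy import policy

    trusted_certs_rule = policy.DocumentedRuleDefault(
        name='os_compute_api:servers:create:trusted_certs',  # assumed name
        check_str='rule:admin_or_owner',
        description='Permit passing trusted image certificate IDs on '
                    'server create/rebuild.',
        operations=[{'path': '/servers', 'method': 'POST'}],
    )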


Alternatively we can block the change in nova until it supports boot 
from volume, but that would mean needing to add trusted cert image 
validation support into cinder along with API changes, effectively 
killing the chance of this getting done in nova in Rocky, and this 
blueprint has been around since at least Ocata so it would be good to 
make progress if possible.


[1] https://review.openstack.org/#/c/486204/
[2] 
https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/nova-validate-certificates.html


--

Thanks,

Matt



Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Chris Dent

On Wed, 18 Apr 2018, Jay Pipes wrote:


Stackers,


Thanks for doing this. Seeing it gathered in one place like this, rather
than spread across IRC and gerrit, is way easier on my brain (for whatever
reason, don't know why).

Eric Fried and I are currently at an impasse regarding a decision that will 
have far-reaching (and end-user facing) impacts to the placement API and how 
nova interacts with the placement service from the nova scheduler.


One thing has felt like it is missing (at least not explicitly
present) in this discussion. We talk about this as if it will have
far-reaching consequences, but it is not clear (to me at least) what
those consequences are, other than needing to diddle yet more syntax
further down the line. Are there deeper consequences than that?

In Viewpoint B, the proposal is to have a separate_providers=1,2 query 
parameter that would indicate that the identified request groups should be 
sourced from separate providers. Request groups that are not listed in the 
separate_providers query parameter are not guaranteed to be sourced from 
different providers.


Do I recall correctly that part of the motivation here (in viewpoint
B) is to be able to express: I'd like two disparate chunks of the same
class of inventory, and while having them come from different
providers is okay, it is also okay if they came from the same?

If that's correct, then that, to me, is fairly compelling if we are
thinking about placement over the long term outside the context of
solely satisfying nova workload placement.

I'm, quite frankly, a bit on the fence about the whole thing and would just 
like to have a clear path forward so that we can start landing the 12+ 
patches that are queued up waiting for a decision on this.


yes

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Default scheduler filters survey

2018-04-18 Thread Chris Friesen

On 04/18/2018 09:17 AM, Artom Lifshitz wrote:


To that end, we'd like to know what filters operators are enabling in
their deployment. If you can, please reply to this email with your
[filter_scheduler]/enabled_filters (or
[DEFAULT]/scheduler_default_filters if you're using an older version)
option from nova.conf. Any other comments are welcome as well :)


RetryFilter
ComputeFilter
AvailabilityZoneFilter
AggregateInstanceExtraSpecsFilter
ComputeCapabilitiesFilter
ImagePropertiesFilter
NUMATopologyFilter
ServerGroupAffinityFilter
ServerGroupAntiAffinityFilter
PciPassthroughFilter


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Ed Leafe
On Apr 18, 2018, at 10:58 AM, Matt Riedemann  wrote:
> 
> So for this case, if I'm requesting numbered request groups, why doesn't the 
> API just require that I pass a query parameter telling it how I'd like those 
> requests to be handled, either via affinity or anti-affinity.

That makes a lot of sense. Since we are already suffixing the query param 
“resources” to indicate granular, why not add a clarifying term to that suffix? 
E.g., “resources1=“ -> “resources1d” (for ‘different’). The exact string we use 
can be bike shedded, but requiring it be specified sounds pretty sane to me.
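
For example (spelling subject to the aforementioned bike shedding), a
request like:

    GET /allocation_candidates?resources1d=VCPU:1&resources2d=VCPU:1

would explicitly ask for the two groups to come from different providers.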

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Ironic Inspector in the overcloud

2018-04-18 Thread Derek Higgins
On 18 April 2018 at 14:22, Bogdan Dobrelya  wrote:

> On 4/18/18 12:07 PM, Derek Higgins wrote:
>
>> Hi All,
>>
>> I've been testing the ironic inspector containerised service in the
>> overcloud, the service essentially works but there is a couple of hurdles
>> to tackle to set it up, the first of these is how to get  the IPA kernel
>> and ramdisk where they need to be.
>>
>> These need to be be present in the ironic_pxe_http container to be served
>> out over http, whats the best way to get them there?
>>
>> On the undercloud this is done by copying the files across the
>> filesystem[1][2] to /httpboot  when we run "openstack overcloud image
>> upload", but on the overcloud an alternative is required, could the files
>> be pulled into the container during setup?
>>
>
> I'd prefer keep bind-mounting IPA kernel and ramdisk into a container via
> the /var/lib/ironic/httpboot host-path. So the question then becomes how to
> deliver those by that path for overcloud nodes?
>
Yup it does, I'm currently looking into using DeployArtifactURLs to
download the files to the controller nodes
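
A rough sketch of what that would look like (the URL is made up, and it
assumes a tarball that unpacks the kernel/ramdisk under
/var/lib/ironic/httpboot):

    parameter_defaults:
      DeployArtifactURLs:
        - "http://example.com/artifacts/ipa-httpboot.tar.gz"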


>
>
>> thanks,
>> Derek
>>
>> 1 - https://github.com/openstack/python-tripleoclient/blob/3cf44
>> eb/tripleoclient/v1/overcloud_image.py#L421-L433
>> 2 - https://github.com/openstack/python-tripleoclient/blob/3cf44
>> eb/tripleoclient/v1/overcloud_image.py#L181
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Chris Friesen

On 04/18/2018 09:58 AM, Matt Riedemann wrote:

On 4/18/2018 9:06 AM, Jay Pipes wrote:

"By default, should resources/traits submitted in different numbered request
groups be supplied by separate resource providers?"


Without knowing all of the hairy use cases, I'm trying to channel my inner
sdague and some of the similar types of discussions we've had to changes in the
compute API, and a lot of the time we've agreed that we shouldn't assume a
default in certain cases.

So for this case, if I'm requesting numbered request groups, why doesn't the API
just require that I pass a query parameter telling it how I'd like those
requests to be handled, either via affinity or anti-affinity.


The request might get unwieldy if we have to specify affinity/anti-affinity for 
each resource.  Maybe you could specify the default for the request and then 
optionally override it for each resource?
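
Something like this purely hypothetical syntax, say (both query parameter
names are invented here just to illustrate the idea):

    GET /allocation_candidates?provider_policy=separate&resources1=VCPU:1&resources2=VCPU:1&same_provider=1,2

i.e. "separate providers by default, but groups 1 and 2 may share one".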


I'm not current on the placement implementation details, but would this level of 
flexibility cause complexity problems in the code?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Chris Friesen

On 04/18/2018 08:06 AM, Jay Pipes wrote:

Stackers,

Eric Fried and I are currently at an impasse regarding a decision that will have
far-reaching (and end-user facing) impacts to the placement API and how nova
interacts with the placement service from the nova scheduler.

We need to make a decision regarding the following question:

"By default, should resources/traits submitted in different numbered request
groups be supplied by separate resource providers?"


I'm a bit conflicted.  On the one hand if we're talking about virtual resources 
like "vCPUs" then there's really no reason why they couldn't be sourced from the 
same resource provider.


On the other hand, once we're talking about *physical* resources it seems like 
it might be more common to want them to be coming from different resource 
providers.  We may want memory spread across multiple NUMA nodes for higher 
aggregate bandwidth, we may want VFs from separate PFs for high availability.


I'm half tempted to side with mriedem and say that there is no default and it 
must be explicit, but I'm concerned that this would make the requests a lot 
larger if you have to specify it for every resource.  (Will follow up in a reply 
to mriedem's post.)



Both proposals include ways to specify whether certain resources or whole
request groups can be forced to be sourced from either a single provider or from
different providers.

In Viewpoint A, the proposal is to have a can_split=RESOURCE1,RESOURCE2 query
parameter that would indicate which resource classes in the unnumbered request
group that may be split across multiple providers (remember that viewpoint A
considers different request groups to explicitly mean different providers, so it
doesn't make sense to have a can_split query parameter for numbered request
groups).



In Viewpoint B, the proposal is to have a separate_providers=1,2 query parameter
that would indicate that the identified request groups should be sourced from
separate providers. Request groups that are not listed in the separate_providers
query parameter are not guaranteed to be sourced from different providers.


In either viewpoint, is there a way to represent "I want two resource groups,
with resource X in each group coming from different resource providers
(anti-affinity) and resource Y from the same resource provider (affinity)"?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky forum topics brainstorming

2018-04-18 Thread melanie witt

On Fri, 13 Apr 2018 08:00:31 -0700, Melanie Witt wrote:

+openstack-operators (apologies that I forgot to add originally)

On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote:

Hey everyone,

Let's collect forum topic brainstorming ideas for the Forum sessions in
Vancouver in this etherpad [0]. Once we've brainstormed, we'll select
and submit our topic proposals for consideration at the end of this
week. The deadline for submissions is Sunday April 15.

Thanks,
-melanie

[0] https://etherpad.openstack.org/p/YVR-nova-brainstorming


Just a reminder that we're collecting forum topic ideas to propose for
Vancouver and input from operators is especially important. Please add
your topics and/or comments to the etherpad [0] and we'll submit
proposals before the Sunday deadline.


Here's a list of nova-related sessions that have been proposed:

* CellsV2 migration process sync with operators:
  http://forumtopics.openstack.org/cfp/details/125

* nova/neutron + ops cross-project session:
  http://forumtopics.openstack.org/cfp/details/124

* Planning to use Placement in Cinder:
  http://forumtopics.openstack.org/cfp/details/89

* Building the path to extracting Placement from Nova:
  http://forumtopics.openstack.org/cfp/details/88

* Multi-attach introduction and future direction:
  http://forumtopics.openstack.org/cfp/details/101

* Making NFV features easier to use:
  http://forumtopics.openstack.org/cfp/details/146

A list of all proposed forum topics can be seen here:

http://forumtopics.openstack.org

Cheers,
-melanie




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat

2018-04-18 Thread Jim Rollenhagen
On Wed, Apr 18, 2018 at 10:56 AM, Matthew Booth  wrote:

>
> > I agree with Mikal that needing more agent behavior than cloud-init does
> > a disservice to the users.
> >
> > I feel like we get a lot of "but no, my hypervisor is special!"
> > reasoning when people go to add a driver to nova. So far, I think
> > they're a lot more similar than people think. Ironic is the weirdest one
> > we have (IMHO and no offense to the ironic folks) and it can support
> > configdrive properly.
>
> I was going to ask this. Even if the contents of the disk can't be
> transferred in advance... how does ironic do this? There must be a
> way.
>

I'm not sure if this is a rhetorical question, so I'll just answer it. :)
We basically build the configdrive in nova-compute, then gzip and base64
it, and send it to ironic with the deploy request. On the ironic side, we
unpack it and write it to the end of the boot disk.

https://github.com/openstack/nova/blob/324899c621ee02d877122ba3412712ebb92831f2/nova/virt/ironic/driver.py#L952-L985
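
In other words, the packing step is conceptually just this (a simplified
sketch, not the actual nova code -- see the link above for that):

    import base64
    import gzip

    def pack_configdrive(path):
        # Read the built configdrive image from disk, gzip it, then
        # base64-encode it so it can travel inside the JSON deploy
        # request sent to ironic.
        with open(path, 'rb') as f:
            raw = f.read()
        return base64.b64encode(gzip.compress(raw)).decode('ascii')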


// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Matt Riedemann

On 4/18/2018 9:06 AM, Jay Pipes wrote:
"By default, should resources/traits submitted in different numbered 
request groups be supplied by separate resource providers?"


Without knowing all of the hairy use cases, I'm trying to channel my 
inner sdague and some of the similar types of discussions we've had to 
changes in the compute API, and a lot of the time we've agreed that we 
shouldn't assume a default in certain cases.


So for this case, if I'm requesting numbered request groups, why doesn't 
the API just require that I pass a query parameter telling it how I'd 
like those requests to be handled, either via affinity or anti-affinity.


I'm specifically thinking about the changes to the compute API in 
microversion 2.37 for get-me-a-network where my initial design was to 
allow the 'networks' entry in the POST /servers request to remain 
optional and default to auto-allocate, but without going into details, 
that could be a problem. So ultimately we just decided that with >=2.37 
you have to specify "networks" in POST /servers and we provided specific 
values for what the networks should be (specific network ID, port ID, 
auto or none). That way the user knows exactly what they are opting into 
rather than rely on default behavior in the server, which might bite you 
(or us) later if we ever want to change that default behavior.
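
So with microversion >= 2.37, every create request has to spell it out,
e.g. (body trimmed down to the relevant bit):

    POST /servers
    {
        "server": {
            ...
            "networks": "auto"
        }
    }

where "networks" must be "auto", "none", or a list of specific
network/port entries.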


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, April 18, 2018 3:39 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [placement][nova] Decision time on
> granular request groups for like resources
> 
> On 04/18/2018 10:30 AM, Eric Fried wrote:
> > Thanks for describing the proposals clearly and concisely, Jay.
> >
> > My preamble would have been that we need to support two use cases:
> >
> > - "explicit anti-affinity": make sure certain parts of my request
> land
> > on *different* providers;
> > - "any fit": make sure my instance lands *somewhere*.
> >
[Mooney, Sean K] For completeness we must also support explicit affinity,
so the three cases are:
"explicit anti-affinity": make sure certain parts of my request land
  on *different* providers in the same tree;
  think VFs for bonded ports.
"explicit affinity":  make sure certain parts of my request land
  on the *same* provider in the same tree.
  This is the NUMA affinity case for RAM and CPUs.
"any fit":    make sure my instance lands *somewhere* within
  the same tree.

We also have to be aware of the implications for sharing resource providers
here, as with Jay's approach you cannot mix shared and non-shared resources
in a numbered request group. With Eric's proposal I believe allocations
within a numbered request group can come from both sharing providers and
local providers, assuming you do not use traits to confine that behavior.
 
> > Both proposals address both use cases, but in different ways.
> 
> Right.
> 
> It's important to point out when we say "different providers" in this
> ML post, we are specifically referring to different providers *within a
> tree of providers*. We are not referring to completely separate compute
> hosts. We are referring to things like multiple NUMA cells that expose
> CPU resources on a single compute host or multiple SR-IOV-enabled
> physical functions that expose SR-IOV VFs for use by guests.
> 
> Best.
> -jay
> 
> >> "By default, should resources/traits submitted in different numbered
> >> request groups be supplied by separate resource providers?"
> >
> > I agree this question needs to be answered, but that won't
> necessarily
> > inform which path we choose.  Viewpoint B [3] is set up to go either
> > way: either we're unrestricted by default and use a queryparam to
> > force separation; or we're split by default and use a queryparam to
> > allow the unrestricted behavior.
> >
> > Otherwise I agree with everything Jay said.
> >
> > -efried
> >
> > On 04/18/2018 09:06 AM, Jay Pipes wrote:
> >> Stackers,
> >>
> >> Eric Fried and I are currently at an impasse regarding a decision
> >> that will have far-reaching (and end-user facing) impacts to the
> >> placement API and how nova interacts with the placement service from
> >> the nova scheduler.
> >>
> >> We need to make a decision regarding the following question:
> >>
> >>
> >> There are two competing proposals right now (both being amendments
> to
> >> the original granular request groups spec [1]) which outline two
> >> different viewpoints.
> >>
> >> Viewpoint A [2], from me, is that like resources listed in different
> >> granular request groups should mean that those resources will be
> >> sourced from *different* resource providers.
> >>
> >> In other words, if I issue the following request:
> >>
> >> GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1
> >>
> >> Then I am assured of getting allocation candidates that contain 2
> >> distinct resource providers consuming 1 VCPU from each provider.
> >>
> >> Viewpoint B [3], from Eric, is that like resources listed in
> >> different granular request groups should not necessarily mean that
> >> those resources will be sourced from different resource providers.
> >> They *could* be sourced from different providers, or they could be
> >> sourced from the same provider.
> >>
> >> Both proposals include ways to specify whether certain resources or
> >> whole request groups can be forced to be sources from either a
> single
> >> provider or from different providers.
> >>
> >> In Viewpoint A, the proposal is to have a
> >> can_split=RESOURCE1,RESOURCE2 query parameter that would indicate
> >> which resource classes in the unnumbered request group that may be
> >> split across multiple providers (remember that viewpoint A considers
> >> different request groups to explicitly mean different providers, so
> >> it doesn't make sense to have a can_split query parameter for
> numbered request groups).
> >>
> >> In Viewpoint B, the proposal is to have a separate_providers=1,2
> >> query parameter that would indicate that the identified request
> >> groups should be sourced from separate providers. Request groups
> that
> >> are not listed in the separate_providers query parameter are not
> >> guaranteed to be 

Re: [openstack-dev] [openstack-infra] How to take over a project?

2018-04-18 Thread Ian Y. Choi

Hello Sangho,

When I look at the
https://review.openstack.org/#/admin/projects/openstack/networking-onos,access
page, it seems that networking-onos-release group members can create stable
branches for the repository.


By the way, since the networking-onos-release group does not include the
neutron release team group, I think the infra team can help to add the
neutron release team, and the neutron release team can then help to create
branches for the repo if there is no response from the current
networking-onos-release group members.



Might this help you?


With many thanks,

/Ian

Sangho Shin wrote on 4/18/2018 2:48 PM:

Hello, Ian

I am trying to add a new stable branch in the networking-onos, 
following the page you suggested.



Create stable/* Branch

For OpenStack projects this should be performed by the OpenStack
Release Management Team at the Release Branch Point. If you are
managing branches for your project you may have permission to do this
yourself.

  * Go to https://review.openstack.org/ and sign in
  * Select ‘Admin’, ‘Projects’, then the project
  * Select ‘Branches’
  * Enter stable/<series> in the ‘Branch Name’ field, and HEAD as
    the ‘Initial Revision’, then press ‘Create Branch’. Alternatively,
    you may run: git branch stable/<series> <sha> && git push gerrit
    stable/<series>


However, after I log in, I cannot see ‘Admin’ and also cannot
create a new branch. Do I need additional authority for it?

BTW, I am a member of networking-onos-core team, as you know.

Thank you,

Sangho



On 18 Apr 2018, at 9:00 AM, Sangho Shin wrote:


Ian and Gary,

Thank you so much for your answer.
I will try what you suggested.

Thank you,

Sangho

On 17 Apr 2018, at 7:47 PM, Gary Kotton wrote:


Hi,
You either need one of the onos core team or the neutron release team
to add you. FYI -
https://review.openstack.org/#/admin/groups/1001,members

Thanks
Gary
*From:* Sangho Shin
*Reply-To:* OpenStack List
*Date:* Tuesday, April 17, 2018 at 5:01 AM
*To:* OpenStack List
*Subject:* [openstack-dev] [openstack-infra] How to take over a project?
Dear OpenStack Infra team,
I would like to know how to take over an OpenStack project.
I am a committer of the networking-onos project 
(https://github.com/openstack/networking-onos), 
and I would like to take over the project.

The current maintainer (cc’d) has already agreed with that.
Please let me know the process to take over (or change the 
maintainer of) the project.
BTW, it looks like even the current maintainer cannot create a new 
branch of the codes. How can we get the authority to create a new 
branch?

Thank you,
Sangho
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Default scheduler filters survey

2018-04-18 Thread Artom Lifshitz
Hi all,

A CI issue [1] caused by tempest thinking some filters are enabled
when they're really not, and a proposed patch [2] to add
(Same|Different)HostFilter to the default filters as a workaround, has
led to a discussion about what filters should be enabled by default in
nova.

The default filters should make sense for a majority of real world
deployments. Adding some filters to the defaults because CI needs them
is faulty logic, because the needs of CI are different to the needs of
operators/users, and the latter takes priority (though it's my
understanding that a good chunk of operators run tempest on their
clouds post-deployment as a way to validate that the cloud is working
properly, so maybe CI's and users' needs aren't that different after
all).

To that end, we'd like to know what filters operators are enabling in
their deployment. If you can, please reply to this email with your
[filter_scheduler]/enabled_filters (or
[DEFAULT]/scheduler_default_filters if you're using an older version)
option from nova.conf. Any other comments are welcome as well :)
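
For reference, the option looks like this in nova.conf; the list shown is
roughly the current default set (double-check against your own release):

    [filter_scheduler]
    enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter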

Cheers!

[1] https://bugs.launchpad.net/tempest/+bug/1628443
[2] https://review.openstack.org/#/c/561651/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat

2018-04-18 Thread Matthew Booth
On 18 April 2018 at 15:10, Dan Smith  wrote:
>> Thanks for the concern and fully under it , the major reason is
>> cloud-init doesn't have a hook or plugin before it start to read
>> config drive (ISO disk) z/VM is an old hypervisor and no way to do
>> something like libvirt to define a ISO format disk in xml definition,
>> instead, it can define disks in the defintion of virtual machine and
>> let VM to decide its format.
>>
>> so we need a way to tell cloud-init where to find ISO file before
>> cloud-init start but without AE, we can't handle that...some update on
>> the spec here for further information
>> https://review.openstack.org/#/c/562154/
>
> The ISO format does not come from telling libvirt something about
> it. The host creates and formats the image, adds the data, and then
> attaches it to the instance. The latter part is the only step that
> involves configuring libvirt to attach the image to the instance. The
> rest is just stuff done by nova-compute (and the virt driver) on the
> linux system it's running on. That's the same arrangement as your
> driver, AFAICT.
>
> You're asking the system to hypervisor (or something running on it) to
> grab the image from glance, pre-filled with data. This is no different,
> except that the configdrive image comes from the system running the
> compute service. I don't see how it's any different in actual hypervisor
> mechanics, and thus feel like there _has_ to be a way to do this without
> the AE magic agent.

Having briefly read the cloud-init snippet which was linked earlier in
this thread, the requirement seems to be that the guest exposes the
device as /dev/srX or /dev/cdX. So I guess in order to make this work:

* You need to tell z/VM to expose the virtual disk as an optical disk
* The z/VM kernel needs to call optical disks /dev/srX or /dev/cdX

> I agree with Mikal that needing more agent behavior than cloud-init does
> a disservice to the users.
>
> I feel like we get a lot of "but no, my hypervisor is special!"
> reasoning when people go to add a driver to nova. So far, I think
> they're a lot more similar than people think. Ironic is the weirdest one
> we have (IMHO and no offense to the ironic folks) and it can support
> configdrive properly.

I was going to ask this. Even if the contents of the disk can't be
transferred in advance... how does ironic do this? There must be a
way.

Matt
-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Jay Pipes

On 04/18/2018 10:30 AM, Eric Fried wrote:

Thanks for describing the proposals clearly and concisely, Jay.

My preamble would have been that we need to support two use cases:

- "explicit anti-affinity": make sure certain parts of my request land
on *different* providers;
- "any fit": make sure my instance lands *somewhere*.

Both proposals address both use cases, but in different ways.


Right.

It's important to point out when we say "different providers" in this ML 
post, we are specifically referring to different providers *within a 
tree of providers*. We are not referring to completely separate compute 
hosts. We are referring to things like multiple NUMA cells that expose 
CPU resources on a single compute host or multiple SR-IOV-enabled 
physical functions that expose SR-IOV VFs for use by guests.


Best.
-jay


"By default, should resources/traits submitted in different numbered
request groups be supplied by separate resource providers?"


I agree this question needs to be answered, but that won't necessarily
inform which path we choose.  Viewpoint B [3] is set up to go either
way: either we're unrestricted by default and use a queryparam to force
separation; or we're split by default and use a queryparam to allow the
unrestricted behavior.

Otherwise I agree with everything Jay said.

-efried

On 04/18/2018 09:06 AM, Jay Pipes wrote:

Stackers,

Eric Fried and I are currently at an impasse regarding a decision that
will have far-reaching (and end-user facing) impacts to the placement
API and how nova interacts with the placement service from the nova
scheduler.

We need to make a decision regarding the following question:


There are two competing proposals right now (both being amendments to
the original granular request groups spec [1]) which outline two
different viewpoints.

Viewpoint A [2], from me, is that like resources listed in different
granular request groups should mean that those resources will be sourced
from *different* resource providers.

In other words, if I issue the following request:

GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1

Then I am assured of getting allocation candidates that contain 2
distinct resource providers consuming 1 VCPU from each provider.

Viewpoint B [3], from Eric, is that like resources listed in different
granular request groups should not necessarily mean that those resources
will be sourced from different resource providers. They *could* be
sourced from different providers, or they could be sourced from the same
provider.

Both proposals include ways to specify whether certain resources or
whole request groups can be forced to be sourced from either a single
provider or from different providers.

In Viewpoint A, the proposal is to have a can_split=RESOURCE1,RESOURCE2
query parameter that would indicate which resource classes in the
unnumbered request group that may be split across multiple providers
(remember that viewpoint A considers different request groups to
explicitly mean different providers, so it doesn't make sense to have a
can_split query parameter for numbered request groups).

In Viewpoint B, the proposal is to have a separate_providers=1,2 query
parameter that would indicate that the identified request groups should
be sourced from separate providers. Request groups that are not listed
in the separate_providers query parameter are not guaranteed to be
sourced from different providers.

I know this is a complex subject, but I thought it was worthwhile trying
to explain the two proposals in as clear terms as I could muster.

I'm, quite frankly, a bit on the fence about the whole thing and would
just like to have a clear path forward so that we can start landing the
12+ patches that are queued up waiting for a decision on this.

Thoughts and opinions welcome.

Thanks,
-jay


[1]
http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html


[2] https://review.openstack.org/#/c/560974/

[3] https://review.openstack.org/#/c/561717/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Eric Fried
Thanks for describing the proposals clearly and concisely, Jay.

My preamble would have been that we need to support two use cases:

- "explicit anti-affinity": make sure certain parts of my request land
on *different* providers;
- "any fit": make sure my instance lands *somewhere*.

Both proposals address both use cases, but in different ways.

> "By default, should resources/traits submitted in different numbered
> request groups be supplied by separate resource providers?"

I agree this question needs to be answered, but that won't necessarily
inform which path we choose.  Viewpoint B [3] is set up to go either
way: either we're unrestricted by default and use a queryparam to force
separation; or we're split by default and use a queryparam to allow the
unrestricted behavior.

Otherwise I agree with everything Jay said.

-efried

On 04/18/2018 09:06 AM, Jay Pipes wrote:
> Stackers,
> 
> Eric Fried and I are currently at an impasse regarding a decision that
> will have far-reaching (and end-user facing) impacts to the placement
> API and how nova interacts with the placement service from the nova
> scheduler.
> 
> We need to make a decision regarding the following question:
> 
> 
> There are two competing proposals right now (both being amendments to
> the original granular request groups spec [1]) which outline two
> different viewpoints.
> 
> Viewpoint A [2], from me, is that like resources listed in different
> granular request groups should mean that those resources will be sourced
> from *different* resource providers.
> 
> In other words, if I issue the following request:
> 
> GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1
> 
> Then I am assured of getting allocation candidates that contain 2
> distinct resource providers consuming 1 VCPU from each provider.
> 
> Viewpoint B [3], from Eric, is that like resources listed in different
> granular request groups should not necessarily mean that those resources
> will be sourced from different resource providers. They *could* be
> sourced from different providers, or they could be sourced from the same
> provider.
> 
> Both proposals include ways to specify whether certain resources or
> whole request groups can be forced to be sources from either a single
> provider or from different providers.
> 
> In Viewpoint A, the proposal is to have a can_split=RESOURCE1,RESOURCE2
> query parameter that would indicate which resource classes in the
> unnumbered request group that may be split across multiple providers
> (remember that viewpoint A considers different request groups to
> explicitly mean different providers, so it doesn't make sense to have a
> can_split query parameter for numbered request groups).
> 
> In Viewpoint B, the proposal is to have a separate_providers=1,2 query
> parameter that would indicate that the identified request groups should
> be sourced from separate providers. Request groups that are not listed
> in the separate_providers query parameter are not guaranteed to be
> sourced from different providers.
> 
> I know this is a complex subject, but I thought it was worthwhile trying
> to explain the two proposals in as clear terms as I could muster.
> 
> I'm, quite frankly, a bit on the fence about the whole thing and would
> just like to have a clear path forward so that we can start landing the
> 12+ patches that are queued up waiting for a decision on this.
> 
> Thoughts and opinions welcome.
> 
> Thanks,
> -jay
> 
> 
> [1]
> http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html
> 
> 
> [2] https://review.openstack.org/#/c/560974/
> 
> [3] https://review.openstack.org/#/c/561717/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat

2018-04-18 Thread Dan Smith
> Thanks for the concern and fully under it , the major reason is
> cloud-init doesn't have a hook or plugin before it start to read
> config drive (ISO disk) z/VM is an old hypervisor and no way to do
> something like libvirt to define a ISO format disk in xml definition,
> instead, it can define disks in the defintion of virtual machine and
> let VM to decide its format.
>
> so we need a way to tell cloud-init where to find ISO file before
> cloud-init start but without AE, we can't handle that...some update on
> the spec here for further information
> https://review.openstack.org/#/c/562154/

The ISO format does not come from telling libvirt something about
it. The host creates and formats the image, adds the data, and then
attaches it to the instance. The latter part is the only step that
involves configuring libvirt to attach the image to the instance. The
rest is just stuff done by nova-compute (and the virt driver) on the
linux system it's running on. That's the same arrangement as your
driver, AFAICT.

You're asking the hypervisor (or something running on it) to
grab the image from glance, pre-filled with data. This is no different,
except that the configdrive image comes from the system running the
compute service. I don't see how it's any different in actual hypervisor
mechanics, and thus feel like there _has_ to be a way to do this without
the AE magic agent.

I agree with Mikal that needing more agent behavior than cloud-init does
a disservice to the users.

I feel like we get a lot of "but no, my hypervisor is special!"
reasoning when people go to add a driver to nova. So far, I think
they're a lot more similar than people think. Ironic is the weirdest one
we have (IMHO and no offense to the ironic folks) and it can support
configdrive properly.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Jay Pipes

Stackers,

Eric Fried and I are currently at an impasse regarding a decision that 
will have far-reaching (and end-user facing) impacts to the placement 
API and how nova interacts with the placement service from the nova 
scheduler.


We need to make a decision regarding the following question:

"By default, should resources/traits submitted in different numbered 
request groups be supplied by separate resource providers?"


There are two competing proposals right now (both being amendments to 
the original granular request groups spec [1]) which outline two 
different viewpoints.


Viewpoint A [2], from me, is that like resources listed in different 
granular request groups should mean that those resources will be sourced 
from *different* resource providers.


In other words, if I issue the following request:

GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1

Then I am assured of getting allocation candidates that contain 2 
distinct resource providers consuming 1 VCPU from each provider.


Viewpoint B [3], from Eric, is that like resources listed in different 
granular request groups should not necessarily mean that those resources 
will be sourced from different resource providers. They *could* be 
sourced from different providers, or they could be sourced from the same 
provider.


Both proposals include ways to specify whether certain resources or
whole request groups can be forced to be sourced from either a single
provider or from different providers.


In Viewpoint A, the proposal is to have a can_split=RESOURCE1,RESOURCE2
query parameter that would indicate which resource classes in the
unnumbered request group may be split across multiple providers
(remember that viewpoint A considers different request groups to 
explicitly mean different providers, so it doesn't make sense to have a 
can_split query parameter for numbered request groups).


In Viewpoint B, the proposal is to have a separate_providers=1,2 query 
parameter that would indicate that the identified request groups should 
be sourced from separate providers. Request groups that are not listed 
in the separate_providers query parameter are not guaranteed to be 
sourced from different providers.
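
To make that concrete with the query parameters described above:

    # Viewpoint A: splitting must be opted into, and only for the
    # unnumbered group
    GET /allocation_candidates?resources=VCPU:2&can_split=VCPU

    # Viewpoint B: separation must be opted into, per numbered group
    GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1&separate_providers=1,2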


I know this is a complex subject, but I thought it was worthwhile trying 
to explain the two proposals in as clear terms as I could muster.


I'm, quite frankly, a bit on the fence about the whole thing and would 
just like to have a clear path forward so that we can start landing the 
12+ patches that are queued up waiting for a decision on this.


Thoughts and opinions welcome.

Thanks,
-jay


[1] 
http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html


[2] https://review.openstack.org/#/c/560974/

[3] https://review.openstack.org/#/c/561717/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Meeting time and location are changed

2018-04-18 Thread Ivan Kolodyazhny
Hi,

Just a reminder that we've got our meeting today at 15:00 UTC in the
#openstack-meeting-alt channel.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Mon, Apr 16, 2018 at 12:01 PM, Ivan Kolodyazhny  wrote:

> Hi team,
>
> Please be informed that Horizon meeting time has been changed [1]. We'll
> have our weekly meetings at 15.00 UTC starting this week at
> 'openstack-meeting-alt' channel. We had to change meeting channel too due
> to the conflict with others.
>
>
> [1] https://review.openstack.org/#/c/560979/
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] Documentation meeting today

2018-04-18 Thread Petr Kovar
Hi all,

The docs meeting will continue today at 16:00 UTC in
#openstack-doc, as scheduled. For more details, see the meeting page:

https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting

Cheers,
pk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [all] openstack-specs process

2018-04-18 Thread Lance Bragstad
Hi all,

There is a specification proposed to openstack/openstack-specs that
summarizes some outcomes from the PTG in Dublin [0].

The keystone team had some questions about what happens next regarding
that specification in this week's meeting [1]. What is the process for
that repository? Is there a schedule?

The Rocky release schedule doesn't seem to have any deadlines for
OpenStack specific specs [2]. I dug through the documentation in the
repository, but I didn't find anything describing the process [3] [4].

[0] https://review.openstack.org/#/c/523973/
[1]
http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-04-17-16.00.log.html#l-66
[2] https://releases.openstack.org/rocky/schedule.html
[3] https://specs.openstack.org/openstack/openstack-specs/readme.html
[4] https://specs.openstack.org/openstack/openstack-specs/contributing.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Ironic Inspector in the overcloud

2018-04-18 Thread Bogdan Dobrelya

On 4/18/18 12:07 PM, Derek Higgins wrote:

Hi All,

I've been testing the ironic inspector containerised service in the 
overcloud, the service essentially works but there is a couple of 
hurdles to tackle to set it up, the first of these is how to get  the 
IPA kernel and ramdisk where they need to be.


These need to be be present in the ironic_pxe_http container to be 
served out over http, whats the best way to get them there?


On the undercloud this is done by copying the files across the 
filesystem[1][2] to /httpboot  when we run "openstack overcloud image 
upload", but on the overcloud an alternative is required, could the 
files be pulled into the container during setup?


I'd prefer keep bind-mounting IPA kernel and ramdisk into a container 
via the /var/lib/ironic/httpboot host-path. So the question then becomes 
how to deliver those by that path for overcloud nodes?
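
In other words, keep something like this in the container definition
(image name and remaining flags elided):

    docker run --volume /var/lib/ironic/httpboot:/var/lib/ironic/httpboot ... ironic_pxe_http

and only solve the "get the files onto the host" part separately.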




thanks,
Derek

1 - 
https://github.com/openstack/python-tripleoclient/blob/3cf44eb/tripleoclient/v1/overcloud_image.py#L421-L433
2 - 
https://github.com/openstack/python-tripleoclient/blob/3cf44eb/tripleoclient/v1/overcloud_image.py#L181 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver

2018-04-18 Thread Chris Dent

On Tue, 17 Apr 2018, Thierry Carrez wrote:


So... Is there any specific topic you think we should cover in that
meeting ?


I'll bite. I've got two topics that I think are pretty critical to
address with the various segments of the community that are the
source of code commits and reviews. Neither of these are
specifically Board issues but are things are that I think are pretty
critical to discuss and address, and topics for which corporate
members of the foundation ought to be worried about.

These aren't fully formed ideas or questions, but I hope that before
we get to Vancouver they might evolve into concrete agenda items
with the usual feedback loops in email. I figure it is better to get
the ball rolling early than wait for perfection.

In the past on topics like this we've said "usually it's not the
right people at the board meeting to make headway on these kinds of
things". That's not our problem nor our responsibility. If the
people at the board meetings are designated representatives of the
corporate members it's their responsibility to hear our issues and
respond appropriately (even if that means, over the long term,
changing the people that are there). The health and productivity of
the community is what we should be concerned with.

The topics:

1. What are we to do, as a community, when external pressures for
results are not matched by contribution of resources to produce
those results? There are probably several examples of this, but one
that I'm particularly familiar with is the drive to be able to
satisfy complex hardware topologies demanded by virtual network
functions and related NFV use cases. Within nova, and I suspect other
projects, there is intense pressure to make progress and intense
effort that is removing resources from other areas. But the amount
of daily, visible contribution from the interest companies [1] is
_sometimes_ limited. There are many factors in this, and obviously
"throw more people at it" is not a silver bullet, but there are
things to talk about here that need the input from all the segments.

2. We've made progress of late with acknowledging the concepts
and importance of casual contribution and "drive-by bug fixing" in
our changing environment. But we've not yet made enough progress in
changing the way we do work. Corporate foundation members need to be
more aware and more accepting that the people they provide to work
"mostly upstream" need to be focused on making other people capable
of contribution. Not on getting features done. And those of us who
do have the privilege of being "mostly upstream" need to adjust our
priorities.

Somewhere in that screed are, I think, some things worth talking
about, but they need to be distilled out.

[1] http://superuser.openstack.org/articles/5g-open-source-att/

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Ironic Inspector in the overcloud

2018-04-18 Thread Derek Higgins
Hi All,

I've been testing the ironic inspector containerised service in the
overcloud. The service essentially works, but there are a couple of hurdles
to tackle to set it up; the first of these is how to get the IPA kernel
and ramdisk where they need to be.

These need to be present in the ironic_pxe_http container to be served
out over http; what's the best way to get them there?

On the undercloud this is done by copying the files across the
filesystem[1][2] to /httpboot when we run "openstack overcloud image
upload", but on the overcloud an alternative is required. Could the files
be pulled into the container during setup?

thanks,
Derek

1 -
https://github.com/openstack/python-tripleoclient/blob/3cf44eb/tripleoclient/v1/overcloud_image.py#L421-L433
2 -
https://github.com/openstack/python-tripleoclient/blob/3cf44eb/tripleoclient/v1/overcloud_image.py#L181
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat

2018-04-18 Thread Chen CH Ji
Thanks for the concern, and I fully understand it. The major reason is that
cloud-init doesn't have a hook or plugin that runs before it starts to read
the config drive (ISO disk).
z/VM is an old hypervisor and has no way to do something like libvirt to
define an ISO-format disk in an XML definition; instead, it can define
disks in the definition
of a virtual machine and let the VM decide their format.

So we need a way to tell cloud-init where to find the ISO file before
cloud-init starts, but without AE we can't handle that. Some updates on the
spec are here
for further information: https://review.openstack.org/#/c/562154/

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Michael Still 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   04/18/2018 05:08 PM
Subject:Re: [openstack-dev] [Nova] z/VM introducing a new config
driveformat



I'm confused about the design of AE to be honest. Is there a good reason
that this functionality couldn't be provided by cloud-init? I think there's
a lot of cost in deviating from the industry standard, so the reasons to do
so have to be really solid.

I'm also a bit confused by what seems to be support for streaming
configuration. Is there any documentation on the design of AE anywhere?

Thanks,
Michael

On Tue, Apr 17, 2018 at 6:58 PM, Chen CH Ji  wrote:
  For the question on AE documentation, it's open source in [1] and the
  documentation for how to build and use is [2]
  once our code is upstream, there are a set of documentation change which
  will cover this image build process by
  adding some links to there [3]

  You are right, we need image to have our Active Engine, I think different
  arch and platform might have their unique
  requirements and our solution our Active Engine is very like to
  cloud-init so no harm to add it from user's perspective
  I think later we can upload image to some place so anyone is able to
  consume it as test image if they like
  because different arch's image (e.g x86 and s390x) can't be shared
  anyway.

  For the config drive format you mentioned, actually, as previous
  explanation and discussion witho Michael and Dan,
  We found the iso9660 can be used (previously we made a bad assumption)
  and we already changed the patch in [4],
  so it's exactly same to other virt drivers you mentioned , we don't need
  special format and iso9660 works perfect for our driver

  It makes sense to me that we are temporarily moved out of the runway. I
  suppose we can adjust the CI to enable run_ssh = true
  with config drive functionality very soon, and we will apply for review
  after that with the test results requested in our CI log.

  Thanks

  [1]
  
https://github.com/mfcloud/python-zvm-sdk/blob/master/tools/share/zvmguestconfigure

  [2]
  
http://cloudlib4zvm.readthedocs.io/en/latest/makeimage.html#configuration-of-activation-engine-ae-in-zlinux

  [3] https://review.openstack.org/#/q/status:open+project:openstack/nova
  +branch:master+topic:bp/add-zvm-driver-rocky
  [4] https://review.openstack.org/#/c/527658/33/nova/virt/zvm/utils.py
  line 104

  Best Regards!

  Kevin (Chen) Ji 纪 晨

  Engineer, zVM Development, CSTL
  Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
  Phone: +86-10-82451493
  Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
  Beijing 100193, PRC


  From: melanie witt 
  To: openstack-dev@lists.openstack.org
  Date: 04/17/2018 09:21 AM
  Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config
  driveformat



  On Mon, 16 Apr 2018 14:56:06 +0800, Chen Ch Ji wrote:
  >  >>>The "iso file" will not be inside the guest, but rather passed to
  > the guest as a block device, right?
  > Cloud init expects to find a config drive with following requirements
  > [1], in order to make cloud init able to consume config drive , we
  > should be able to prepare it,
  > in some hypervisor, you can define something like following to the VM
  > then VM startup is able to consume it
  > 
  > but for z/VM case it allows disk to be created during VM create (define

  > )stage but no disk format set, it's the operating system's
  > responsibility to define the purpose of the
  > disk, so what we do is
  > 1) first when we build image ,we create a small AE like cloud-init but
  > only purpose is to get files from z/VM internal pipe and handle config
  > drive case

  What does AE stand for? So, this means in 

Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat

2018-04-18 Thread Chen CH Ji
Added an update to the spec addressing the issues that were raised:
https://review.openstack.org/#/c/562154/ , including:

1) How the config drive (Metadata) defined
2) Special AE reason and why it's needed, also ,some documentation and
source code links
3) neutron agent for z/VM

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   melanie witt 
To: Dan Smith 
Cc: openstack-dev@lists.openstack.org
Date:   04/18/2018 01:47 AM
Subject:Re: [openstack-dev] [Nova] z/VM introducing a new config
driveformat



On Tue, 17 Apr 2018 06:40:35 -0700, Dan Smith wrote:
>> I propose that we remove the z/VM driver blueprint from the runway at
>> this time and place it back into the queue while work on the driver
>> continues. At a minimum, we need to see z/VM CI running with
>> [validation]run_validation = True in tempest.conf before we add the
>> z/VM driver blueprint back into a runway in the future.
>
> Agreed. I also want to see the CI reporting cleaned up so that it's
> readable and consistent. Yesterday I pointed out some issues with the
> fact that the actual config files being used are not the ones being
> uploaded. There are also duplicate (but not actually identical) logs
> from all services being uploaded, including things like a full compute
> log from starting with the libvirt driver.

Yes, we definitely need to see all of these issues fixed.

> I'm also pretty troubled by the total lack of support for the metadata
> service. I know it's technically optional on our matrix, but it's a
> pretty important feature for a lot of scenarios, and it's also a
> dependency for other features that we'd like to have wider support for
> (like attached device metadata).
>
> Going back to the spec, I see very little detail on some of the things
> raised here, and very (very) little review back when it was first
> approved. I'd also like to see more detail be added to the spec about
> all of these things, especially around required special changes like
> this extra AE agent.

Agreed, can someone from the z/VM team please propose an update to the
driver spec to document these details?

Thanks,
-melanie











__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev=DwIGaQ=jf_iaSHvJObTbx-siA1ZOg=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0=CCtWdN4OOlqKzrLg4ctuY1D_fHo8wvps59hVs35J8ys=wHuQV89_dwXLe15VAkg8_UOBPfjD72vB0_47W6BgRVk=




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg]Weekly Team Meeting 2018.04.18

2018-04-18 Thread Zhipeng Huang
Hi Team,

Weekly meeting as usual starting UTC1400 at #openstack-cyborg, initial
agenda as follows:

1. MS1 preparation
2. bug report on storyboard
3. Rocky critical spec review
4. open patches discussion

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project

2018-04-18 Thread Paul Bourke

+1

On 18/04/18 02:51, Jeffrey Zhang wrote:
Since many of the contributors in the kolla-kubernetes project have moved
on to other things, there has been no active contributor for months. On the
other hand, there is another comparable project, openstack-helm, in the
community. To reduce confusion and avoid splitting community resources, I
propose to retire the kolla-kubernetes project.


For more discussion, see the mail[0] and patch[1].

Please vote +1 to retire the repo, or -1 not to retire it. The vote will be
open until everyone has voted, or for one week, until April 25th, 2018.


[0] 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html

[1] https://review.openstack.org/552531

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][neutron][requirements][pbr]Use git+https line in requirements.txt break the pip install

2018-04-18 Thread Michel Peterson
On Wed, Apr 18, 2018 at 12:02 PM, Michel Peterson  wrote:

> How can we fix this? There are several ways I can think of off the top of
> my head:
>
>
>    1. When encountered with edge cases like this one, first install that
>    dependency with a manual pip run [2]
>    2. Modify pbr to handle these situations by handling the installation
>    of those dependencies differently with a workaround to the current
>    functionality of pip
>    3. Leverage the work of corvus [3] to not only do what that patch
>    is doing, but also include the checked-out path of the dependency in
>    PIP_FIND_LINKS, so that pip knows how to solve the issue.
>
> All these solutions have different sets of pros and cons, but I favor #3
> as the long-term solution and #1 as the short-term one; I think #2
> requires further analysis by the pbr team.
>

I forgot to add the reference for where to set PIP_FIND_LINKS for
solution #3; here you go:

https://github.com/openstack-dev/devstack/blob/f99d1771ba1882dfbb69186212a197edae3ef02c/inc/python#L362
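
To make the idea concrete, a minimal sketch, assuming ceilometer has
already been checked out locally (the paths are illustrative only;
PIP_FIND_LINKS is pip's environment-variable form of --find-links):

  # Build a wheel from the sibling checkout (path is hypothetical)
  $ pip wheel --no-deps -w /tmp/sibling-wheels /opt/stack/ceilometer
  # Point pip at that directory so the bare "ceilometer" requirement resolves
  $ export PIP_FIND_LINKS=/tmp/sibling-wheels
  $ pip install -c upper-constraints.txt ./networking-odl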
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat

2018-04-18 Thread Michael Still
I'm confused about the design of AE to be honest. Is there a good reason
that this functionality couldn't be provided by cloud-init? I think there's
a lot of cost in deviating from the industry standard, so the reasons to do
so have to be really solid.

I'm also a bit confused by what seems to be support for streaming
configuration. Is there any documentation on the design of AE anywhere?

Thanks,
Michael

On Tue, Apr 17, 2018 at 6:58 PM, Chen CH Ji  wrote:

> For the question on AE documentation, it's open source in [1] and the
> documentation for how to build and use it is [2].
> Once our code is upstream, there is a set of documentation changes which
> will cover this image build process by adding some links there [3].
>
> You are right, we need the image to have our Active Engine. I think
> different arches and platforms might have their own unique requirements,
> and our Active Engine is very similar to cloud-init, so there is no harm
> in adding it from the user's perspective.
> I think later we can upload the image somewhere so anyone is able to
> consume it as a test image if they like, because different arches' images
> (e.g. x86 and s390x) can't be shared anyway.
>
> For the config drive format you mentioned: as per the previous explanation
> and discussion with Michael and Dan, we found that iso9660 can be used
> (previously we made a bad assumption) and we already changed the patch in
> [4], so it's exactly the same as the other virt drivers you mentioned; we
> don't need a special format, and iso9660 works perfectly for our driver.
>
> It makes sense to me that we are temporarily moved out of the runway. I
> suppose we can adjust the CI to enable run_ssh = true with config drive
> functionality very soon, and we will apply for review after that, with
> the test results requested in our CI log.
>
> Thanks
>
> [1] https://github.com/mfcloud/python-zvm-sdk/blob/master/tools/share/zvmguestconfigure
> [2] http://cloudlib4zvm.readthedocs.io/en/latest/makeimage.html#configuration-of-activation-engine-ae-in-zlinux
> [3] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/add-zvm-driver-rocky
> [4] https://review.openstack.org/#/c/527658/33/nova/virt/zvm/utils.py
> line 104
>
> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
>
>
> From: melanie witt 
> To: openstack-dev@lists.openstack.org
> Date: 04/17/2018 09:21 AM
> Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config
> driveformat
> --
>
>
>
> On Mon, 16 Apr 2018 14:56:06 +0800, Chen Ch Ji wrote:
> >  >>>The "iso file" will not be inside the guest, but rather passed to
> > the guest as a block device, right?
> > Cloud init expects to find a config drive with following requirements
> > [1], in order to make cloud init able to consume config drive , we
> > should be able to prepare it,
> > in some hypervisor, you can define something like following to the VM
> > then VM startup is able to consume it
> > 
> > but for z/VM case it allows disk to be created during VM create (define
> > )stage but no disk format set, it's the operating system's
> > responsibility to define the purpose of the
> > disk, so what we do is
> > 1) first when we build image ,we create a small AE like cloud-init but
> > only purpose is to get files from z/VM internal pipe and handle config
> > drive case
>
> What does AE stand for? So, this means in order to use the z/VM driver,
> users must have special images that will ensure the config drive will be
> readable by cloud-init. They can't use standard cloud images.
>
> > 2) During spawn we create config drive in nova-compute side then send
> > the file to z/VM through z/VM internal pipe (omit detail here)
> > 3) During startup of the virtual machine, the small AE is able to mount
> > the file as loop device and then in turn cloud-init is able to handle it
> >
> > because this is our special case, we don't want to upload to cloud-init
> > community because of uniqueness and as far as we can tell, no hook in
> > cloud-init mechanism allowed as well
> > to let us 'mount -o loop' ; also, from openstack point of view except
> > this small AE (which is documented well) no special thing and
> > inconsistent to other drivers
> >
> > [1] https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceConfigDrive.py#L225
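
For readers following the AE discussion, here is a rough sketch of the
mechanism described above; the file paths are illustrative, while iso9660
and the "config-2" volume label are the standard config drive conventions:

  # Nova builds an iso9660 config drive (volume label "config-2"); for z/VM
  # the resulting file is shipped into the guest over the internal pipe.
  $ genisoimage -o configdrive.iso -V config-2 -r -J ./configdrive-content
  # Inside the guest, the AE attaches the file to a loop device so cloud-init
  # can find a block device carrying the "config-2" label:
  $ losetup /dev/loop0 /var/opt/configdrive.iso
  $ blkid /dev/loop0    # => LABEL="config-2" TYPE="iso9660"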

Re: [openstack-dev] [kolla][neutron][requirements][pbr]Use git+https line in requirements.txt break the pip install

2018-04-18 Thread Michel Peterson
Hi, I'm one of the networking-odl core devs.

On Wed, Apr 18, 2018 at 5:48 AM, Jeffrey Zhang 
wrote:

>
> Recently, the networking-odl package broke kolla's gate[0]. The direct
> issue is that ceilometer was added to networking-odl's requirements.txt
> file[1].
>

This is an issue that concerns me too. First off let me start with a simple
solution, which is to install ceilometer from git before requiring
networking-odl. Also, if networking-odl is installed through devstack's
enable_plugin this issue wouldn't arise (as the plugin.sh takes care of
installing ceilometer before installing networking-odl).

Still, I see this as a problem; I just haven't found a way to solve it in
general, short of ceilometer being published to PyPI. What happened then is
I got caught up in other priorities that took bandwidth away from it and
kinda forgot about it.


>
> Then, when installing networking-odl with the upper-constraints.txt file,
> it raises an error like
>
> $ pip install -c https://git.openstack.org/cgit/openstack/requirements/
> plain/upper-constraints.txt ./networking-odl
> ...
> Collecting networking-bgpvpn>=8.0.0 (from networking-odl==12.0.1.dev54)
>   Downloading http://pypi.doubanio.com/packages/5a/e5/
> 995be0d53d472f739a7a0bb6c9d9fecbc4936148651aaf56d39f3b65b1f1
> /networking_bgpvpn-8.0.0-py2-none-any.whl (172kB)
> 100% || 174kB 12.0MB/s
> Collecting ceilometer (from networking-odl==12.0.1.dev54)
>   Could not find a version that satisfies the requirement ceilometer (from
> networking-odl==12.0.1.dev54) (from versions: )
> No matching distribution found for ceilometer (from
> networking-odl==12.0.1.dev54)
>
>
> But if you just install from networking-odl's requirements.txt file, it
> works
>
>
> $ pip install -c https://git.openstack.org/cgit/openstack/requirements/
> plain/upper-constraints.txt -r ./networking-odl/requirements.txt
> ...
> Obtaining ceilometer from git+https://git.openstack.org/
> openstack/ceilometer@master#egg=ceilometer (from -r
> networking-odl/requirements.txt (line 21))
>   Cloning https://git.openstack.org/openstack/ceilometer (to revision
> master) to /home/jeffrey/.dotfiles/virtualenvs/test/src/ceilometer
> ...
>
>
> Is this expected? And how could we fix this?
>

This is an interesting case of how pip works differently when installing
from a requirements file versus from a folder (as happens with -e or the
first command you issued). While in the former it knows how to resolve the
dependencies correctly, in the latter it actually relies on the setup.py
file to install. That means it goes into pbr's realm and does not use the
requirements file at all. So let's analyse what happens in pbr.

Internally, what pbr does is read the requirements.txt, find the -e line,
read its comment that says #egg=ceilometer, and add that as a requirement
[1]. What it fails to do, though, is instruct pip to fetch it from the git
repository (as the requirements file would). Sadly, this is not only a
problem of pbr but also a limitation of the current state of pip and the
corresponding PEPs, which apparently is already addressed for the long term
with new PEPs and upcoming changes to pip.
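
Concretely, the requirements line in question looks like this (the
extraction comments below are illustrative, paraphrasing the behavior
just described):

  # networking-odl requirements.txt (line 21), per the pip output quoted above:
  -e git+https://git.openstack.org/openstack/ceilometer@master#egg=ceilometer
  # pbr keeps only the name from the #egg= fragment, so the requirement
  # becomes plain "ceilometer" and the git URL is lost during resolution.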

How can we fix this? There are several ways I can think of off the top of
my head:


   1. When encountered with edge cases like this one, first install that
   dependency with a manual pip run [2] (see the sketch below)
   2. Modify pbr to handle these situations by handling the installation of
   those dependencies differently with a workaround to the current
   functionality of pip
   3. Leverage the work of corvus [3] to not only do what that patch is
   doing, but also include the checked-out path of the dependency in
   PIP_FIND_LINKS, so that pip knows how to solve the issue.

All these solutions have different sets of pros and cons, but I favor #3 as
the long-term solution and #1 as the short-term one; I think #2 requires
further analysis by the pbr team.
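
As a sketch of the short-term workaround (#1), under the assumption that
pre-installing the git dependency satisfies pbr's bare "ceilometer"
requirement before pip ever tries to resolve it:

  # Install the unpublished dependency from git first...
  $ pip install git+https://git.openstack.org/openstack/ceilometer@master#egg=ceilometer
  # ...then install the package itself; "ceilometer" is already satisfied.
  $ pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt ./networking-odl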


Hope my contribution helped to clarify this issue.


[1]:
https://github.com/openstack-dev/pbr/blob/7767c44ab1289ed7d1cc4f9e12986bef07865d5c/pbr/packaging.py#L168

[2]:
https://github.com/openstack/networking-odl/blob/aa3acb23a5736f128fee0a514a588b9035551d88/devstack/entry_points#L259

[3]: https://review.openstack.org/549252/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] The Weekly Owl - 17th Edition

2018-04-18 Thread Bogdan Dobrelya

On 4/18/18 3:54 AM, Wesley Hayutin wrote:



On Tue, Apr 17, 2018 at 9:44 PM Emilien Macchi wrote:


Note: this is the seventeenth edition of a weekly update of what
happens in TripleO.
The goal is to provide a short reading (less than 5 minutes) to
learn where we are and what we're doing.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129255.html

+-+
| General announcements |
+-+

+--> Rocky milestone 1 will be released this week (probably tomorrow)!
+--> (reminder) if you're looking at reproducing a CI job, checkout:
https://docs.openstack.org/tripleo-docs/latest/contributor/reproduce-ci.html

+--+
| Continuous Integration |
+--+

+--> Ruck is quiquell and Rover is panda. Please let them know any
new CI issue.
+--> Master promotion is 1 day, Queens is 2 days, Pike is 4 days and
Ocata is 5 days.
+--> Efforts around libvirt based multinode reproducer, see
https://trello.com/c/JEGLSVh6/323-reproduce-ci-jobs-with-libvirt
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP


So, just to add some context: we would like to be able to set up libvirt
guests in the same way nodepool nodes are set up, to allow the CI team and
others to re-execute upstream CI jobs on libvirt using the exact workflow
that upstream jobs take.


A reminder: the current reproduce scripts are documented here [1]. We plan
on updating that doc with our libvirt work when it is ready. Thanks all


This is a really great effort! Thank you for doing this. Will this also
bring the deployed-servers feature to libvirt setups?




[1] http://tripleo.org/contributor/reproduce-ci.html



+-+
| Upgrades |
+-+

+--> Progress on FFU CLI in tripleoclient, need reviews.
+--> Work for containerized undercloud upgrades has been merged.
Testing will make progress after rocky-m1 (with new tags).
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---+
| Containers |
+---+

+--> Still working on UX problems
+--> Still working on container workflow, good progress last week
where container prepare isn't needed. Now working on container updates.
+--> Investigating how to bootstrap Docker + Registry before
deploying containers
+--> Progress on routed networks support
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+--+
| config-download |
+--+

+--> Moving to config-download by default is coming very soon (once
Ceph patches land).
+--> Ceph was migrated and all patches are going to merge this week.
+--> octavia/skydive migration is wip.
+--> Improving deploy-steps-tasks.j2 to improve playbook readability
and memory consumption
+--> UI work is work in progress.
+--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+--+
| Integration |
+--+

+--> No updates.
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+-+
| UI/CLI |
+-+

+--> Efforts on config-download integration
+--> Added type to ansible-playbook messages (feedback needed)
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---+
| Validations |
+---+

+--> No updates.
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---+
| Networking |
+---+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--+
| Workflows |
+--+

+--> Need reviews, see etherpad.
+--> Working on workflows v2
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+---+
| Security |
+---+

+--> Tomorrow's meeting is about Storyboard migration and Secret
management.
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

++
| Owl fact  |
++

Did you know owls were watching you while 

Re: [openstack-dev] [kolla][neutron][requirements][pbr]Use git+https line in requirements.txt break the pip install

2018-04-18 Thread thomas.morin
As I understand it, this is due to a not-yet-completed transition in
networking-odl after it stopped using tools/tox_install.sh and switched to
relying on the tox-siblings CI role instead.


I'm not able to explain the difference between the two "pip install" run 
variants that you see, though.


For the record, a distinct side effect of the same incomplete transition
is also tracked in [1]: having networking-bgpvpn depend on
networking-odl from git (relying on black magic by the tox-siblings
ansible role and 'required-projects' job configuration) would not work
anymore after the change in networking-odl to depend on ceilometer with
'-e git+...'.


-Thomas

[1] https://bugs.launchpad.net/networking-odl/+bug/1764371


On 18/04/2018 04:48, Jeffrey Zhang wrote:


Recently, the networking-odl package broke kolla's gate[0]. The direct
issue is that ceilometer was added to networking-odl's
requirements.txt file[1].


Then, when installing networking-odl with the upper-constraints.txt file,
it raises an error like:


$ pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt ./networking-odl
...
Collecting networking-bgpvpn>=8.0.0 (from networking-odl==12.0.1.dev54)
  Downloading http://pypi.doubanio.com/packages/5a/e5/995be0d53d472f739a7a0bb6c9d9fecbc4936148651aaf56d39f3b65b1f1/networking_bgpvpn-8.0.0-py2-none-any.whl (172kB)
    100% || 174kB 12.0MB/s
Collecting ceilometer (from networking-odl==12.0.1.dev54)
  Could not find a version that satisfies the requirement ceilometer (from networking-odl==12.0.1.dev54) (from versions: )
No matching distribution found for ceilometer (from networking-odl==12.0.1.dev54)



But if you just install from networking-odl's requirements.txt file, it
works:

$ pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt -r ./networking-odl/requirements.txt
...
Obtaining ceilometer from git+https://git.openstack.org/openstack/ceilometer@master#egg=ceilometer (from -r networking-odl/requirements.txt (line 21))
  Cloning https://git.openstack.org/openstack/ceilometer (to revision master) to /home/jeffrey/.dotfiles/virtualenvs/test/src/ceilometer
...


Is this expected? And how could we fix this?


[0] https://bugs.launchpad.net/kolla/+bug/1764621
[1] https://github.com/openstack/networking-odl/blob/master/requirements.txt#L21


--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Problems with Openstack services while migrating VMs

2018-04-18 Thread Jean-Philippe Evrard
Maybe worth posting on the operators list, but it looks like the scheduling
of the action fails, which makes me think that nova is not running properly
somewhere.

Why is the restart in a random order? That can cause issues, and
that's the whole reason why we orchestrate the deploys/upgrades
with ansible.
Also, why don't you follow our operations guide for recovering from a
failure? Is there something wrong there?

Regards,
JP

On 17 April 2018 at 14:07, Periyasamy Palanisamy
 wrote:
> Hi,
>
>
>
> I'm trying to migrate controller and compute VMs installed with
> OpenStack-Ansible across systems with the following approach.
>
> This is mainly to minimize the deployment time in the Jenkins CI
> environment.
>
>
>
> Export steps:
>
> Power off the VMs gracefully.
> virsh dumpxml ${node} > $EXPORT_PATH/${node}.xml
> cp /var/lib/libvirt/images/${node}.qcow2 $EXPORT_PATH/$node.qcow2
> create a tar ball for the xml’s and qcow2 images.
>
>
>
> Import steps:
>
> cp ${node}.qcow2 /var/lib/libvirt/images/
> virsh define ${node}.xml
> virsh start ${node}
>
>
>
> After the import of the VMs, the OpenStack services (neutron-server, DHCP
> agent, Metering agent, Metadata agent, L3 agent, Open vSwitch agent,
> nova-conductor and nova-compute) are started in random order.
>
> This means neutron and nova are not able to find the DHCP agent and the
> compute service to bring up the tenant VM, and the error [1] is thrown.
>
>
>
> I have also tried booting the compute VM followed by the controller VM;
> that doesn't help either.
>
> Could you please let me know what is going wrong here ?
>
>
>
> [1] https://paste.ubuntu.com/p/YNg2NnjvpS/ (fault section)
>
>
>
> Thanks,
>
> Periyasamy
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
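
For what it's worth, a deterministic bring-up might look like the sketch
below. The systemd unit names are illustrative only; an OpenStack-Ansible
deployment typically runs these inside LXC containers, so the exact names
and hosts will differ:

  # On the imported controller, start infrastructure before the APIs:
  $ systemctl start mariadb rabbitmq-server memcached
  $ systemctl start nova-conductor nova-scheduler nova-api
  $ systemctl start neutron-server neutron-dhcp-agent neutron-l3-agent \
        neutron-metadata-agent neutron-openvswitch-agent
  # Only once those are up, start the compute VM's services:
  $ systemctl start nova-compute neutron-openvswitch-agent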

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project

2018-04-18 Thread Takashi Sogabe
+1

From: Jeffrey Zhang [mailto:zhang.lei@gmail.com]
Sent: Wednesday, April 18, 2018 10:52 AM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project

Since many of the contributors in the kolla-kubernetes project have moved on
to other things, there has been no active contributor for months. On the
other hand, there is another comparable project, openstack-helm, in the
community. To reduce confusion and avoid splitting community resources, I
propose to retire the kolla-kubernetes project.

For more discussion, see the mail[0] and patch[1].

Please vote +1 to retire the repo, or -1 not to retire it. The vote will be
open until everyone has voted, or for one week, until April 25th, 2018.

[0] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html
[1] https://review.openstack.org/552531

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev