Re: [openstack-dev] [helm] multiple nova compute nodes

2018-10-02 Thread Steve Wilkerson
In addition to targeting nodes by labels (these labels are exposed for
overrides in the nova chart’s values.yaml, so they can be whatever labels
you wish them to be), you can also disable particular templates in the nova
chart.  You can find these under the ‘manifests:’ key in the chart’s
values.yaml. Each template in the nova chart has a key that toggles whether
that template is deployed, and the keys are named after the templates they
control.  With this, you can exclude particular nova components if you
desire.
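For example, a rough overrides sketch (the label and manifest key names
below are meant to mirror the chart's values.yaml; treat them as
illustrative and check your checkout for the exact keys):

    # nova-overrides.yaml -- pin compute pods to a label and skip a template
    cat > /tmp/nova-overrides.yaml <<EOF
    labels:
      agent:
        compute:
          node_selector_key: openstack-compute-node
          node_selector_value: enabled
    manifests:
      daemonset_compute: true        # keep deploying the nova-compute daemonset
      deployment_consoleauth: false  # example: exclude the consoleauth deployment
    EOF

    # feed the overrides to the nova chart on install/upgrade
    helm upgrade --install nova ./nova --namespace=openstack \
      --values=/tmp/nova-overrides.yaml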

Hope that helps clear things up.

Cheers,
Steve

On Tue, Oct 2, 2018 at 6:04 PM Chris Friesen 
wrote:

> On 10/2/2018 4:15 PM, Giridhar Jayavelu wrote:
> > Hi,
> > Currently, all nova components are packaged in the same helm chart "nova".
> > Are there any plans to separate nova-compute from the rest of the services?
> > What should be the approach for deploying multiple nova compute nodes
> > using OpenStack helm charts?
>
> The nova-compute pods are part of a daemonset which will automatically
> create a nova-compute pod on each node that has the
> "openstack-compute-node=enabled" label.
>
> Chris
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [helm] multiple nova compute nodes

2018-10-02 Thread Chris Friesen

On 10/2/2018 4:15 PM, Giridhar Jayavelu wrote:

Hi,
Currently, all nova components are packaged in the same helm chart "nova". Are
there any plans to separate nova-compute from the rest of the services?
What should be the approach for deploying multiple nova compute nodes using
OpenStack helm charts?


The nova-compute pods are part of a daemonset which will automatically 
create a nova-compute pod on each node that has the 
"openstack-compute-node=enabled" label.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [helm] multiple nova compute nodes

2018-10-02 Thread Giridhar Jayavelu
Hi,
Currently, all nova components are packaged in the same helm chart "nova". Are
there any plans to separate nova-compute from the rest of the services?
What should be the approach for deploying multiple nova compute nodes using
OpenStack helm charts?

Thanks,
Giri

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Reminder weekly meeting Public Cloud WG

2018-10-02 Thread Tobias Rydberg

Hi everyone,

Time for a new meeting for PCWG - 3rd October 0700 UTC in 
#openstack-publiccloud! Agenda found at 
https://etherpad.openstack.org/p/publiccloud-wg


Talk to you in a couple of hours!

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Twitter & IRC: tobberydberg

www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] nominating Amit Oren for manila core

2018-10-02 Thread Rodrigo Barbieri
+1

--
Rodrigo Barbieri
MSc Computer Scientist
OpenStack Manila Core Contributor
Federal University of São Carlos

On Tue, Oct 2, 2018, 17:02 Jay S Bryant  wrote:

> As a friend of Manila I am definitely +1 except that Cinder would like
> him back full time.  ;-)
>
> Jay
>
>
> On 10/2/2018 12:58 PM, Tom Barron wrote:
> > Amit Oren has contributed high quality reviews in the last couple of
> > cycles so I would like to nominate him for manila core.
> >
> > Please respond with your +1 or -1 votes.  We'll hold voting open for 7
> > days.
> >
> > Thanks,
> >
> > -- Tom Barron (tbarron)
> >
> >
> >
> __
> >
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] nominating Amit Oren for manila core

2018-10-02 Thread Jay S Bryant
As a friend of Manila I am definitely +1 except that Cinder would like 
him back full time.  ;-)


Jay


On 10/2/2018 12:58 PM, Tom Barron wrote:
Amit Oren has contributed high quality reviews in the last couple of 
cycles so I would like to nominate him for manila core.


Please respond with your +1 or -1 votes.  We'll hold voting open for 7 
days.


Thanks,

-- Tom Barron (tbarron)


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Stable Core Team Update

2018-10-02 Thread Brian Haley

+1 from me :)

-Brian

On 10/02/2018 11:41 AM, Miguel Lavalle wrote:

Hi Stable Team,

I want to nominate Bernard Cafarelli as a stable core reviewer for 
Neutron and related projects. Bernard has been increasing the number of 
stable reviews he is doing for the project [1]. Besides that, he is a 
stable maintainer downstream for his employer (Red Hat), so he can bring 
that valuable experience to the Neutron stable team.


Thanks and regards

Miguel

[1] 
https://review.openstack.org/#/q/(project:openstack/neutron+OR+openstack/networking-sfc+OR+project:openstack/networking-ovn)++branch:%255Estable/.*+reviewedby:%22Bernard+Cafarelli+%253Cbcafarel%2540redhat.com%253E%22 




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Stable Core Team Update

2018-10-02 Thread Sean McGinnis
On Tue, Oct 02, 2018 at 01:45:38PM -0500, Matt Riedemann wrote:
> On 10/2/2018 10:41 AM, Miguel Lavalle wrote:
> > Hi Stable Team,
> > 
> > I want to nominate Bernard Cafarelli as a stable core reviewer for
> > Neutron and related projects. Bernard has been increasing the number of
> > stable reviews he is doing for the project [1]. Besides that, he is a
> > stable maintainer downstream for his employer (Red Hat), so he can bring
> > that valuable experience to the Neutron stable team.
> > 
> > Thanks and regards
> > 
> > Miguel
> > 
> > [1] 
> > https://review.openstack.org/#/q/(project:openstack/neutron+OR+openstack/networking-sfc+OR+project:openstack/networking-ovn)++branch:%255Estable/.*+reviewedby:%22Bernard+Cafarelli+%253Cbcafarel%2540redhat.com%253E%22
> >  
> > 
> 
> +1 from me.
> 
> -- 
> 
> Thanks,
> 
> Matt

+1 from me as well.

Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Stable Core Team Update

2018-10-02 Thread Matt Riedemann

On 10/2/2018 10:41 AM, Miguel Lavalle wrote:

Hi Stable Team,

I want to nominate Bernard Cafarelli as a stable core reviewer for 
Neutron and related projects. Bernard has been increasing the number of 
stable reviews he is doing for the project [1]. Besides that, he is a 
stable maintainer downstream for his employer (Red Hat), so he can bring 
that valuable experience to the Neutron stable team.


Thanks and regards

Miguel

[1] 
https://review.openstack.org/#/q/(project:openstack/neutron+OR+openstack/networking-sfc+OR+project:openstack/networking-ovn)++branch:%255Estable/.*+reviewedby:%22Bernard+Cafarelli+%253Cbcafarel%2540redhat.com%253E%22 



+1 from me.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] nominating Amit Oren for manila core

2018-10-02 Thread Victoria Martínez de la Cruz
+1

:D

On Tue, Oct 2, 2018 at 15:00, Xing Yang (xingyang...@gmail.com)
wrote:

> +1
>
> On Tue, Oct 2, 2018 at 1:58 PM Tom Barron  wrote:
>
>> Amit Oren has contributed high quality reviews in the last couple of
>> cycles so I would like to nominate him for manila core.
>>
>> Please respond with your +1 or -1 votes.  We'll hold voting open for 7
>> days.
>>
>> Thanks,
>>
>> -- Tom Barron (tbarron)
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] nominating Amit Oren for manila core

2018-10-02 Thread Xing Yang
+1

On Tue, Oct 2, 2018 at 1:58 PM Tom Barron  wrote:

> Amit Oren has contributed high quality reviews in the last couple of
> cycles so I would like to nominate him for manila core.
>
> Please respond with your +1 or -1 votes.  We'll hold voting open for 7
> days.
>
> Thanks,
>
> -- Tom Barron (tbarron)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] nominating Amit Oren for manila core

2018-10-02 Thread Tom Barron
Amit Oren has contributed high quality reviews in the last couple of 
cycles so I would like to nominate him for manila core.


Please respond with your +1 or -1 votes.  We'll hold voting open for 7 
days.


Thanks,

-- Tom Barron (tbarron)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Cancelling the notification subteam weekly meeting indefinitely

2018-10-02 Thread Balázs Gibizer
Hi,

Due to the low amount of ongoing work in this area, there is little interest
in keeping this meeting going, so I'm cancelling it indefinitely [1].

Of course, I'm still interested in helping with any notification-related
work in the future, and you can reach me in #openstack-nova as usual.

cheers,
gibi


[1]https://review.openstack.org/#/c/607314/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-40

2018-10-02 Thread Dmitry Tantsur

++ that was very helpful, thanks Chris!

On 10/2/18 6:18 PM, Steven Dake (stdake) wrote:

Chris,

Thanks for all the hard work you have put into this.  FWIW I found value in 
your reports, but perhaps because I am not involved in the daily activities of 
the TC.

Cheers
-steve


On 10/2/18, 8:25 AM, "Chris Dent"  wrote:

 
 HTML: https://anticdent.org/tc-report-18-40.html
 
 I'm going to take a break from writing the TC reports for a while.

 If other people (whether on the TC or not) are interested in
 producing their own form of a subjective review of the week's TC
 activity, I very much encourage you to do so. It's proven an
 effective way to help at least some people maintain engagement.
 
 I may pick it up again when I feel like I have sufficient focus and

 energy to produce something that has more value and interpretation
 than simply pointing at
 [the IRC logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/).
 However, at this time, I'm not producing a product that is worth the
 time it takes me to do it and the time it takes away from doing
 other things. I'd rather make more significant progress on fewer
 things.
 
 In the meantime, please join me in congratulating and welcoming the

 newly elected members of the TC: Lance Bragstad, Jean-Philippe
 Evrard, Doug Hellman, Julia Kreger, Ghanshyam Mann, and Jeremy
 Stanley.
 
 
 --

 Chris Dent   ٩◔̯◔۶   https://anticdent.org/
 freenode: cdent tw: @anticdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread Dmitry Tantsur

On 10/2/18 6:17 PM, Mark Goddard wrote:



On Tue, 2 Oct 2018 at 17:10, Jim Rollenhagen wrote:


On Tue, Oct 2, 2018 at 11:40 AM Eric Fried  wrote:

 > What Eric is proposing (and Julia and I seem to be in favor of), is
 > nearly the same as your proposal. The single difference is that these
 > config templates or deploy templates or whatever could *also* require
 > certain traits, and the scheduler would use that information to pick a
 > node. While this does put some scheduling information into the config
 > template, it also means that we can remove some of the flavor explosion
 > *and* mostly separate scheduling from configuration.
 >
 > So, you'd have a list of traits on a flavor:
 >
 > required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC
 >
 > And you would also have a list of traits in the deploy template:
 >
 > {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config": }
 >
 > This allows for making flavors that are reasonably flexible (instead of
 > two flavors that do VMX and IPSEC acceleration, one of which does RAID).
 > It also allows users to specify a desired configuration without also
 > needing to know how to correctly choose a flavor that can handle that
 > configuration.
 >
 > I think it makes a lot of sense, doesn't impose more work on users, and
 > can reduce the number of flavors operators need to manage.
 >
 > Does that make sense?

This is in fact exactly what Jay proposed. And both Julia and I are in
favor of it as an ideal long-term solution. Where Julia and I deviated
from Jay's point of view was in our desire to use "the hack" in the
short term so we can satisfy the majority of use cases right away
without having to wait for that ideal solution to materialize.


Ah, good point, I had missed that initially. Thanks. Let's do that.

So if we all agree Jay's proposal is the right thing to do, is there any
reason to start working on a short-term hack instead of putting those
efforts into the better solution? I don't see why we couldn't get that done
in one cycle, if we're all in agreement on it.


I'm still unclear on the ironic side of this. I can see that config of some sort 
is stored in glance, and referenced upon nova server creation. Somehow this 
would be synced to ironic by the nova virt driver during node provisioning. The 
part that's missing in my mind is how to map from a config in glance to a set of 
actions performed by ironic. Does the config in glance reference a deploy 
template, or a set of ironic deploy steps? Or does ironic (or OpenStack) define 
some config schema that it supports, and use it to generate a set of deploy steps?


I think the most straightforward way is through the same deploy steps mechanism
we planned: make the virt driver fetch the config from glance, then pass it to
the provisioning API. As a bonus, we'll get the same API workflow in the
standalone and nova cases.





// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [goal][python3] week 8 update

2018-10-02 Thread Doug Hellmann
This is week 8 of the "Run under Python 3 by default" goal
(https://governance.openstack.org/tc/goals/stein/python3-first.html). 

== Ongoing and Completed Work ==

I proposed a large set of new patches to update the tox settings for
repositories that were still using python2 for doc, release note,
linter, etc. jobs. Quite a few of those were duplicates, so if you find
that someone else has already started that work please vote -1 on my
patch with a link to the other one, and I'll abandon mine.

+----------------+---------------------+--------------+------+----------+---------+------------+-------+--------------------+
| Team           | zuul                | tox defaults | Docs | 3.6 unit | Failing | Unreviewed | Total | Champion           |
+----------------+---------------------+--------------+------+----------+---------+------------+-------+--------------------+
| adjutant       | +                   | 1/1          | -    | +        |       0 |          1 |     6 | Doug Hellmann      |
| barbican       | 7/13                | +            | 1/3  | +        |       6 |          4 |    20 | Doug Hellmann      |
| blazar         | +                   | +            | +    | +        |       0 |          0 |    25 | Nguyen Hai         |
| Chef OpenStack | +                   | 2/2          | -    | -        |       1 |          1 |     3 | Doug Hellmann      |
| cinder         | +                   | 1/3          | +    | +        |       0 |          1 |    33 | Doug Hellmann      |
| cloudkitty     | +                   | +            | +    | +        |       0 |          0 |    26 | Doug Hellmann      |
| congress       | +                   | 1/3          | +    | +        |       1 |          1 |    25 | Nguyen Hai         |
| cyborg         | +                   | +            | +    | +        |       0 |          0 |    16 | Nguyen Hai         |
| designate      | +                   | 2/4          | +    | +        |       0 |          1 |    26 | Nguyen Hai         |
| Documentation  | +                   | 1/5          | +    | +        |       1 |          1 |    23 | Doug Hellmann      |
| dragonflow     | +                   | -            | +    | +        |       0 |          0 |     6 | Nguyen Hai         |
| ec2-api        | +                   | 2/2          | +    | +        |       2 |          2 |    14 |                    |
| freezer        | waiting for cleanup | 1/5          | +    | +        |       0 |          1 |    34 |                    |
| glance         | +                   | 1/4          | +    | +        |       0 |          0 |    26 | Nguyen Hai         |
| heat           | 3/27                | 4/8          | 1/6  | 1/7      |       2 |          4 |    48 | Doug Hellmann      |
| horizon        | +                   | 1/32         | +    | +        |       0 |          1 |    42 | Nguyen Hai         |
| I18n           | +                   | 1/1          | -    | -        |       0 |          0 |     3 | Doug Hellmann      |
| InteropWG      | +                   | 4/4          | +    | 1/3      |       2 |          4 |    14 | Doug Hellmann      |
| ironic         | +                   | 1/10         | +    | +        |       0 |          1 |    95 | Doug Hellmann      |
| karbor         | +                   | +            | +    | +        |       0 |          0 |    22 | Nguyen Hai         |
| keystone       | +                   | 1/7          | +    | +        |       0 |          0 |    48 | Doug Hellmann      |
| kolla          | +                   | 1/1          | +    | +        |       1 |          0 |    13 |                    |
| kuryr          | +                   | +            | +    | +        |       0 |          0 |    20 | Doug Hellmann      |
| magnum         | +                   | 2/5          | +    | +        |       0 |          1 |    27 |                    |
| manila         | +                   | 4/8          | +    | +        |       0 |          0 |    32 | Goutham Pacha Ravi |
| masakari       | +                   | 3/5          | +    | -        |       0 |          3 |    24 | Nguyen Hai         |
| mistral        | +                   | +            | +    | +        |       0 |          0 |    38 | Nguyen Hai         |
| monasca        | 1/66                | 5/17         | +    | +        |       3 |          4 |   100 | Doug Hellmann      |
| murano         | +                   | 2/5          | +    | +        |       0 |          2 |    39 |                    |
| neutron        | 15/73               | 12/18        | 2/14 | 2/13     |      18 |         18 |   118 | Doug Hellmann      |
| nova           | +

Re: [openstack-dev] [placement] The "intended purpose" of traits

2018-10-02 Thread Eric Fried


On 09/28/2018 07:23 PM, Mohammed Naser wrote:
> On Fri, Sep 28, 2018 at 7:17 PM Chris Dent  wrote:
>>
>> On Fri, 28 Sep 2018, melanie witt wrote:
>>
>>> I'm concerned about a lot of repetition here and maintenance headache for
>>> operators. That's where the thoughts about whether we should provide
>>> something like a key-value construct to API callers where they can instead
>>> say:
>>>
>>> * OWNER=CINDER
>>> * RAID=10
>>> * NUMA_CELL=0
>>>
>>> for each resource provider.
>>>
>>> If I'm off base with my example, please let me know. I'm not a placement
>>> expert.
>>>
>>> Anyway, I hope that gives an idea of what I'm thinking about in this
>>> discussion. I agree we need to pick a direction and go with it. I'm just
>>> trying to look out for the experience operators are going to be using this
>>> and maintaining it in their deployments.
>>
>> Despite saying "let's never do this" with regard to having formal
>> support for key/values in placement, if we did choose to do it (if
>> that's what we chose, I'd live with it), when would we do it? We
>> have a very long backlog of features that are not yet done. I
>> believe (I hope obviously) that we will be able to accelerate
>> placement's velocity with it being extracted, but that won't be
>> enough to suddenly be able to quickly do all the things we have
>> on the plate.
>>
>> Are we going to make people wait for some unknown amount of time,
>> in the meantime? While there is a grammar that could do some of
>> these things?
>>
>> Unless additional resources come on the scene, I don't think it is
>> either feasible or reasonable for us to consider doing any model
>> extension at this time (irrespective of the merit of the idea).
>>
>> In some kind of weird belief way I'd really prefer we keep the
>> grammar placement exposes simple, because my experience with HTTP
>> APIs strongly suggests that's very important, and that experience is
>> effectively why I am here, but I have no interest in being a
>> fundamentalist about it. We should argue about it strongly to make
>> sure we get the right result, but it's not a huge deal either way.
> 
> Is there a spec up for this should anyone want to implement it?

By "this" are you referring to a placement key/value primitive?

There is not a spec or blueprint that I'm aware of. And I think the
reason is the strong and immediate resistance to the very idea any time
it is mentioned. Who would want to write a spec that's almost certain to
be vetoed?

> 
>> --
>> Chris Dent   ٩◔̯◔۶   https://anticdent.org/
>> freenode: cdent tw: 
>> @anticdent__
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread Eric Fried


On 10/02/2018 11:09 AM, Jim Rollenhagen wrote:
> On Tue, Oct 2, 2018 at 11:40 AM Eric Fried  wrote:
> 
> > What Eric is proposing (and Julia and I seem to be in favor of), is
> > nearly the same as your proposal. The single difference is that these
> > config templates or deploy templates or whatever could *also* require
> > certain traits, and the scheduler would use that information to pick a
> > node. While this does put some scheduling information into the config
> > template, it also means that we can remove some of the flavor
> explosion
> > *and* mostly separate scheduling from configuration.
> >
> > So, you'd have a list of traits on a flavor:
> >
> > required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC
> >
> > And you would also have a list of traits in the deploy template:
> >
> > {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config":
> }
> >
> > This allows for making flavors that are reasonably flexible
> (instead of
> > two flavors that do VMX and IPSEC acceleration, one of which does
> RAID).
> > It also allows users to specify a desired configuration without also
> > needing to know how to correctly choose a flavor that can handle that
> > configuration.
> >
> > I think it makes a lot of sense, doesn't impose more work on
> users, and
> > can reduce the number of flavors operators need to manage.
> >
> > Does that make sense?
> 
> This is in fact exactly what Jay proposed. And both Julia and I are in
> favor of it as an ideal long-term solution. Where Julia and I deviated
> from Jay's point of view was in our desire to use "the hack" in the
> short term so we can satisfy the majority of use cases right away
> without having to wait for that ideal solution to materialize.
> 
> 
> Ah, good point, I had missed that initially. Thanks. Let's do that.
> 
> So if we all agree Jay's proposal is the right thing to do, is there any
> reason to start working on a short-term hack instead of putting those
> efforts into the better solution? I don't see why we couldn't get that
> done in one cycle, if we're all in agreement on it.

It takes more than agreement, though. It takes resources. I may have
misunderstood a major theme of the PTG, but I think the Nova team is
pretty overextended already. Even assuming authorship by wicked smaaht
folks such as yourself, the spec and code reviews will require a
nontrivial investment from Nova cores. The result would likely be
de-/re-prioritization of things we just got done agreeing to work on. If
that's The Right Thing, so be it. But we can't just say we're going to
move forward with something of this magnitude without sacrificing
something else.

(Note that the above opinion is based on the assumption that the hacky
way will require *much* less spec/code/review bandwidth to accomplish.
If that's not true, then I totally agree with you that we should spend
our time working on the right solution.)

> 
> // jim
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-40

2018-10-02 Thread Steven Dake (stdake)
Chris,

Thanks for all the hard work you have put into this.  FWIW I found value in 
your reports, but perhaps because I am not involved in the daily activities of 
the TC.

Cheers
-steve


On 10/2/18, 8:25 AM, "Chris Dent"  wrote:


HTML: https://anticdent.org/tc-report-18-40.html

I'm going to take a break from writing the TC reports for a while.
If other people (whether on the TC or not) are interested in
producing their own form of a subjective review of the week's TC
activity, I very much encourage you to do so. It's proven an
effective way to help at least some people maintain engagement.

I may pick it up again when I feel like I have sufficient focus and
energy to produce something that has more value and interpretation
than simply pointing at
[the IRC logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/).
However, at this time, I'm not producing a product that is worth the
time it takes me to do it and the time it takes away from doing
other things. I'd rather make more significant progress on fewer
things.

In the meantime, please join me in congratulating and welcoming the
newly elected members of the TC: Lance Bragstad, Jean-Philippe
Evrard, Doug Hellman, Julia Kreger, Ghanshyam Mann, and Jeremy
Stanley.


-- 
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread Mark Goddard
On Tue, 2 Oct 2018 at 17:10, Jim Rollenhagen  wrote:

> On Tue, Oct 2, 2018 at 11:40 AM Eric Fried  wrote:
>
>> > What Eric is proposing (and Julia and I seem to be in favor of), is
>> > nearly the same as your proposal. The single difference is that these
>> > config templates or deploy templates or whatever could *also* require
>> > certain traits, and the scheduler would use that information to pick a
>> > node. While this does put some scheduling information into the config
>> > template, it also means that we can remove some of the flavor explosion
>> > *and* mostly separate scheduling from configuration.
>> >
>> > So, you'd have a list of traits on a flavor:
>> >
>> > required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC
>> >
>> > And you would also have a list of traits in the deploy template:
>> >
>> > {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config": > blob>}
>> >
>> > This allows for making flavors that are reasonably flexible (instead of
>> > two flavors that do VMX and IPSEC acceleration, one of which does RAID).
>> > It also allows users to specify a desired configuration without also
>> > needing to know how to correctly choose a flavor that can handle that
>> > configuration.
>> >
>> > I think it makes a lot of sense, doesn't impose more work on users, and
>> > can reduce the number of flavors operators need to manage.
>> >
>> > Does that make sense?
>>
>> This is in fact exactly what Jay proposed. And both Julia and I are in
>> favor of it as an ideal long-term solution. Where Julia and I deviated
>> from Jay's point of view was in our desire to use "the hack" in the
>> short term so we can satisfy the majority of use cases right away
>> without having to wait for that ideal solution to materialize.
>>
>
> Ah, good point, I had missed that initially. Thanks. Let's do that.
>
> So if we all agree Jay's proposal is the right thing to do, is there any
> reason to start working on a short-term hack instead of putting those
> efforts into the better solution? I don't see why we couldn't get that done
> in one cycle, if we're all in agreement on it.
>

I'm still unclear on the ironic side of this. I can see that config of some
sort is stored in glance, and referenced upon nova server creation. Somehow
this would be synced to ironic by the nova virt driver during node
provisioning. The part that's missing in my mind is how to map from a
config in glance to a set of actions performed by ironic. Does the config
in glance reference a deploy template, or a set of ironic deploy steps? Or
does ironic (or OpenStack) define some config schema that it supports, and
use it to generate a set of deploy steps?


> // jim
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread Jim Rollenhagen
On Tue, Oct 2, 2018 at 11:40 AM Eric Fried  wrote:

> > What Eric is proposing (and Julia and I seem to be in favor of), is
> > nearly the same as your proposal. The single difference is that these
> > config templates or deploy templates or whatever could *also* require
> > certain traits, and the scheduler would use that information to pick a
> > node. While this does put some scheduling information into the config
> > template, it also means that we can remove some of the flavor explosion
> > *and* mostly separate scheduling from configuration.
> >
> > So, you'd have a list of traits on a flavor:
> >
> > required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC
> >
> > And you would also have a list of traits in the deploy template:
> >
> > {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config":  blob>}
> >
> > This allows for making flavors that are reasonably flexible (instead of
> > two flavors that do VMX and IPSEC acceleration, one of which does RAID).
> > It also allows users to specify a desired configuration without also
> > needing to know how to correctly choose a flavor that can handle that
> > configuration.
> >
> > I think it makes a lot of sense, doesn't impose more work on users, and
> > can reduce the number of flavors operators need to manage.
> >
> > Does that make sense?
>
> This is in fact exactly what Jay proposed. And both Julia and I are in
> favor of it as an ideal long-term solution. Where Julia and I deviated
> from Jay's point of view was in our desire to use "the hack" in the
> short term so we can satisfy the majority of use cases right away
> without having to wait for that ideal solution to materialize.
>

Ah, good point, I had missed that initially. Thanks. Let's do that.

So if we all agree Jay's proposal is the right thing to do, is there any
reason to start working on a short-term hack instead of putting those
efforts into the better solution? I don't see why we couldn't get that done
in one cycle, if we're all in agreement on it.

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][stable] Stable Core Team Update

2018-10-02 Thread Miguel Lavalle
Hi Stable Team,

I want to nominate Bernard Cafarelli as a stable core reviewer for Neutron
and related projects. Bernard has been increasing the number of stable
reviews he is doing for the project [1]. Besides that, he is a stable
maintainer downstream for his employer (Red Hat), so he can bring that
valuable experience to the Neutron stable team.

Thanks and regards

Miguel

[1]
https://review.openstack.org/#/q/(project:openstack/neutron+OR+openstack/networking-sfc+OR+project:openstack/networking-ovn)++branch:%255Estable/.*+reviewedby:%22Bernard+Cafarelli+%253Cbcafarel%2540redhat.com%253E%22
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread Eric Fried
> What Eric is proposing (and Julia and I seem to be in favor of), is
> nearly the same as your proposal. The single difference is that these
> config templates or deploy templates or whatever could *also* require
> certain traits, and the scheduler would use that information to pick a
> node. While this does put some scheduling information into the config
> template, it also means that we can remove some of the flavor explosion
> *and* mostly separate scheduling from configuration.
> 
> So, you'd have a list of traits on a flavor:
> 
> required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC
> 
> And you would also have a list of traits in the deploy template:
> 
> {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config": }
> 
> This allows for making flavors that are reasonably flexible (instead of
> two flavors that do VMX and IPSEC acceleration, one of which does RAID).
> It also allows users to specify a desired configuration without also
> needing to know how to correctly choose a flavor that can handle that
> configuration.
> 
> I think it makes a lot of sense, doesn't impose more work on users, and
> can reduce the number of flavors operators need to manage.
> 
> Does that make sense?

This is in fact exactly what Jay proposed. And both Julia and I are in
favor of it as an ideal long-term solution. Where Julia and I deviated
from Jay's point of view was in our desire to use "the hack" in the
short term so we can satisfy the majority of use cases right away
without having to wait for that ideal solution to materialize.

> 
> // jim
> 
> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 18-40

2018-10-02 Thread Chris Dent


HTML: https://anticdent.org/tc-report-18-40.html

I'm going to take a break from writing the TC reports for a while.
If other people (whether on the TC or not) are interested in
producing their own form of a subjective review of the week's TC
activity, I very much encourage you to do so. It's proven an
effective way to help at least some people maintain engagement.

I may pick it up again when I feel like I have sufficient focus and
energy to produce something that has more value and interpretation
than simply pointing at
[the IRC logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/).
However, at this time, I'm not producing a product that is worth the
time it takes me to do it and the time it takes away from doing
other things. I'd rather make more significant progress on fewer
things.

In the meantime, please join me in congratulating and welcoming the
newly elected members of the TC: Lance Bragstad, Jean-Philippe
Evrard, Doug Hellman, Julia Kreger, Ghanshyam Mann, and Jeremy
Stanley.


--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][tripleo] Tenks

2018-10-02 Thread Ben Nemec



On 10/2/18 9:37 AM, Mark Goddard wrote:



On Tue, 2 Oct 2018 at 14:03, Jay Pipes wrote:


On 10/02/2018 08:58 AM, Mark Goddard wrote:
 > Hi,
 >
 > In the most recent Ironic meeting we discussed [1] tenks, and the
 > possibility of adding the project under Ironic governance. We agreed to
 > move the discussion to the mailing list. I'll introduce the project here
 > and give everyone a chance to ask questions. If things appear to move in
 > the right direction, I'll propose a vote for inclusion under Ironic's
 > governance.
 >
 > Tenks is a project for managing 'virtual bare metal clusters'. It aims
 > to be a drop-in replacement for the various scripts and templates that
 > exist in the Ironic devstack plugin for creating VMs to act as bare
 > metal nodes in development and test environments. Similar code exists in
 > Bifrost and TripleO, and probably other places too. By focusing on one
 > project, we can ensure that it works well, and provides all the features
 > necessary as support for bare metal in the cloud evolves.
 >
 > That's tenks the concept. Tenks in reality today is a working version
 > 1.0, written in Ansible, built by Will Miller (w-miller) during his
 > summer placement. Will has returned to his studies, and Will Szumski
 > (jovial) has picked it up. You don't have to be called Will to work on
 > Tenks, but it helps.
 >
 > There are various resources available for anyone wishing to find out more:
 >
 > * Ironic spec review: https://review.openstack.org/#/c/579583
 > * Documentation: https://tenks.readthedocs.io/en/latest/
 > * Source code: https://github.com/stackhpc/tenks
 > * Blog: https://stackhpc.com/tenks.html
 > * IRC: mgoddard or jovial in #openstack-ironic
 >
 > What does everyone think? Is this something that the ironic community
 > could or should take ownership of?

How does Tenks relate to OVB?


https://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html


Good question. As far as I'm aware, OVB is a tool for using an OpenStack 
cloud to host the virtual bare metal nodes, and is typically used for 
testing TripleO. Tenks does not rule out supporting this use case in 
future, but currently operates more like the Ironic devstack plugin, 
using libvirt/KVM/QEMU as the virtualisation provider.


Yeah, sounds like this is more a replacement for the kvm virtual 
environment setup in tripleo-quickstart. I'm adding the tripleo tag for 
their attention.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread Jim Rollenhagen
On Mon, Oct 1, 2018 at 6:38 PM Jay Pipes  wrote:

> On 10/01/2018 06:04 PM, Julia Kreger wrote:
> > On Mon, Oct 1, 2018 at 2:41 PM Eric Fried  wrote:
>
> 

>
> > That said, what if it was:
> >
> >   openstack config-profile create --name BOOT_MODE_UEFI --json -
> >   {
> >"type": "boot_mode_scheme",
> >"version": 123,
> >"object": {
> >"boot_mode": "uefi"
> >},
> >"placement": {
> > "traits": {
> >  "required": [
> >   "BOOT_MODE_UEFI"
> >  ]
> > }
> >}
> >   }
> >   ^D
> >
> > And now you could in fact say
> >
> >   openstack server create --flavor foo --config-profile
> BOOT_MODE_UEFI
> >
> > using the profile name, which happens to be the same as the trait
> name
> > because you made it so. Does that satisfy the yen for saying it
> once? (I
> > mean, despite the fact that you first had to say it three times to
> get
> > it set up.)
> >
> 
> >
> > I feel like it might be confusing, but totally +1 to matching required
> > trait name being a thing. That way scheduling is completely decoupled
> > and if everything was correct then the request should already be
> > scheduled properly.
>
> I guess I'll just drop the idea of doing this properly then. It's true
> that the placement traits concept can be hacked up and the virt driver
> can just pass a list of trait strings to the Ironic API and that's the
> most expedient way to get what the 90% of people apparently want. It's
> also true that it will add a bunch of unmaintainable tribal knowledge
> into the interface between Nova and Ironic, but that has been the case
> for multiple years.
>
> The flavor explosion problem will continue to get worse for those of us
> who deal with its pain (Oath in particular feels this) because the
> interface between nova flavors and Ironic instance capabilities will
> continue to be super-tightly-coupled.
>
> For the record, I would have been happier if someone had proposed
> separating the instance configuration data in the flavor extra-specs
> from the notion of required placement constraints (i.e. traits). You
> could call the extra_spec "deploy_template_id" if you wanted and that
> extra spec value could have been passed to Ironic during node
> provisioning instead of the list of placement constraints (traits).
>
> So, you'd have a list of actual placement traits for an instance that
> looked like this:
>
> required=BOOT_MODE_UEFI,STORAGE_HARDWARE_RAID
>
> and you'd have a flavor extra spec called "deploy_template_id" with a
> value of the deploy template configuration data you wanted to
> communicate to Ironic. The Ironic virt driver could then just look for
> the "deploy_template_id" extra spec and pass the value of that to the
> Ironic API instead of passing a list of traits.
>
> That would have at least satisfied my desire to separate configuration
> data from placement constraints.
>
> Anyway, I'm done trying to please my own desires for a clean solution to
> this.
>

Jay, please don't stop - I think we aren't expressing ourselves well, or
you're missing something, or both. I understand this is a frustrating
conversation for everyone. But I think we're making good progress on the
end goal (whether or not we do an intermediate step that hacks on top of
traits). We all want a clean solution to this.

What Eric is proposing (and Julia and I seem to be in favor of), is nearly
the same as your proposal. The single difference is that these config
templates or deploy templates or whatever could *also* require certain
traits, and the scheduler would use that information to pick a node. While
this does put some scheduling information into the config template, it also
means that we can remove some of the flavor explosion *and* mostly separate
scheduling from configuration.

So, you'd have a list of traits on a flavor:

required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC

And you would also have a list of traits in the deploy template:

{"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config": }

This allows for making flavors that are reasonably flexible (instead of two
flavors that do VMX and IPSEC acceleration, one of which does RAID). It
also allows users to specify a desired configuration without also needing
to know how to correctly choose a flavor that can handle that configuration.
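(For the flavor half, that list is just the existing required-trait extra
spec syntax; a rough sketch, with a made-up flavor name:

    openstack flavor set bm-accel \
      --property trait:HW_CPU_X86_VMX=required \
      --property trait:HW_NIC_ACCEL_IPSEC=required

The deploy template's own required traits would then be folded into the same
placement request, per the above.)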

I think it makes a lot of sense, doesn't impose more work on users, and can
reduce the number of flavors operators need to manage.

Does that make sense?

// jim


> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [ironic] Tenks

2018-10-02 Thread Mark Goddard
On Tue, 2 Oct 2018 at 14:03, Jay Pipes  wrote:

> On 10/02/2018 08:58 AM, Mark Goddard wrote:
> > Hi,
> >
> > In the most recent Ironic meeting we discussed [1] tenks, and the
> > possibility of adding the project under Ironic governance. We agreed to
> > move the discussion to the mailing list. I'll introduce the project here
> > and give everyone a chance to ask questions. If things appear to move in
> > the right direction, I'll propose a vote for inclusion under Ironic's
> > governance.
> >
> > Tenks is a project for managing 'virtual bare metal clusters'. It aims
> > to be a drop-in replacement for the various scripts and templates that
> > exist in the Ironic devstack plugin for creating VMs to act as bare
> > metal nodes in development and test environments. Similar code exists in
> > Bifrost and TripleO, and probably other places too. By focusing on one
> > project, we can ensure that it works well, and provides all the features
> > necessary as support for bare metal in the cloud evolves.
> >
> > That's tenks the concept. Tenks in reality today is a working version
> > 1.0, written in Ansible, built by Will Miller (w-miller) during his
> > summer placement. Will has returned to his studies, and Will Szumski
> > (jovial) has picked it up. You don't have to be called Will to work on
> > Tenks, but it helps.
> >
> > There are various resources available for anyone wishing to find out
> more:
> >
> > * Ironic spec review: https://review.openstack.org/#/c/579583
> > * Documentation: https://tenks.readthedocs.io/en/latest/
> > * Source code: https://github.com/stackhpc/tenks
> > * Blog: https://stackhpc.com/tenks.html
> > * IRC: mgoddard or jovial in #openstack-ironic
> >
> > What does everyone think? Is this something that the ironic community
> > could or should take ownership of?
>
> How does Tenks relate to OVB?
>
>
> https://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html


Good question. As far as I'm aware, OVB is a tool for using an OpenStack
cloud to host the virtual bare metal nodes, and is typically used for
testing TripleO. Tenks does not rule out supporting this use case in
future, but currently operates more like the Ironic devstack plugin, using
libvirt/KVM/QEMU as the virtualisation provider.


>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Tenks

2018-10-02 Thread Jay Pipes

On 10/02/2018 08:58 AM, Mark Goddard wrote:

Hi,

In the most recent Ironic meeting we discussed [1] tenks, and the 
possibility of adding the project under Ironic governance. We agreed to 
move the discussion to the mailing list. I'll introduce the project here 
and give everyone a chance to ask questions. If things appear to move in 
the right direction, I'll propose a vote for inclusion under Ironic's 
governance.


Tenks is a project for managing 'virtual bare metal clusters'. It aims 
to be a drop-in replacement for the various scripts and templates that 
exist in the Ironic devstack plugin for creating VMs to act as bare 
metal nodes in development and test environments. Similar code exists in 
Bifrost and TripleO, and probably other places too. By focusing on one 
project, we can ensure that it works well, and provides all the features 
necessary as support for bare metal in the cloud evolves.


That's tenks the concept. Tenks in reality today is a working version 
1.0, written in Ansible, built by Will Miller (w-miller) during his 
summer placement. Will has returned to his studies, and Will Szumski 
(jovial) has picked it up. You don't have to be called Will to work on 
Tenks, but it helps.


There are various resources available for anyone wishing to find out more:

* Ironic spec review: https://review.openstack.org/#/c/579583
* Documentation: https://tenks.readthedocs.io/en/latest/
* Source code: https://github.com/stackhpc/tenks
* Blog: https://stackhpc.com/tenks.html
* IRC: mgoddard or jovial in #openstack-ironic

What does everyone think? Is this something that the ironic community 
could or should take ownership of?


How does Tenks relate to OVB?

https://openstack-virtual-baremetal.readthedocs.io/en/latest/introduction.html

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Gerrit User Summit, November 2018

2018-10-02 Thread Adam Spiers

Hi all,

The next forthcoming Gerrit User Summit 2018 will be Nov 15th-16th in
Palo Alto, hosted by Cloudera.

See the Gerrit User Summit page at:

   https://gerrit.googlesource.com/summit/2018/+/master/index.md

and the event registration at:

   https://gus2018.eventbrite.com

Hopefully some members of the OpenStack community can attend the
event, not just so we can keep up to date with Gerrit but also so that
our interests can be represented!

Regards,
Adam

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Tenks

2018-10-02 Thread Mark Goddard
Hi,

In the most recent Ironic meeting we discussed [1] tenks, and the
possibility of adding the project under Ironic governance. We agreed to
move the discussion to the mailing list. I'll introduce the project here
and give everyone a chance to ask questions. If things appear to move in
the right direction, I'll propose a vote for inclusion under Ironic's
governance.

Tenks is a project for managing 'virtual bare metal clusters'. It aims to
be a drop-in replacement for the various scripts and templates that exist
in the Ironic devstack plugin for creating VMs to act as bare metal nodes
in development and test environments. Similar code exists in Bifrost and
TripleO, and probably other places too. By focusing on one project, we can
ensure that it works well, and provides all the features necessary as
support for bare metal in the cloud evolves.

That's tenks the concept. Tenks in reality today is a working version 1.0,
written in Ansible, built by Will Miller (w-miller) during his summer
placement. Will has returned to his studies, and Will Szumski (jovial) has
picked it up. You don't have to be called Will to work on Tenks, but it
helps.

There are various resources available for anyone wishing to find out more:

* Ironic spec review: https://review.openstack.org/#/c/579583
* Documentation: https://tenks.readthedocs.io/en/latest/
* Source code: https://github.com/stackhpc/tenks
* Blog: https://stackhpc.com/tenks.html
* IRC: mgoddard or jovial in #openstack-ironic

What does everyone think? Is this something that the ironic community could
or should take ownership of?

[1]
http://eavesdrop.openstack.org/meetings/ironic/2018/ironic.2018-10-01-15.00.log.html#l-170

Thanks,
Mark
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [k8s][magnum][zun] Notification of removal of in-tree K8s OpenStack Provider

2018-10-02 Thread Chris Hoge
For those projects that use OpenStack as a cloud provider for K8s, there
is a patch in flight[1] to remove the in-tree OpenStack provider from the
kubernetes/kubernetes repository. The provider has been deprecated for
two releases, with a replacement external provider available[2]. Before
we merge this patch for the 1.13 K8s release cycle, we want to make sure
that projects dependent on the in-tree provider (expecially thinking
about projects like Magnum and Zun) have an opportunity to express their
readiness to switch over.

[1] https://github.com/kubernetes/kubernetes/pull/67782
[2] https://github.com/kubernetes/cloud-provider-openstack


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread John Garbutt
Back to the deprecation for a moment...

My plan was to tell folks to use Traits to influence placement
decisions, rather than capabilities.

We probably can't remove the feature until we have deploy templates,
but it seems wrong not to warn our users away from capabilities when
80% of the use cases can be moved to traits today, which gives better
performance, etc.
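For the static cases the switch is fairly mechanical; a rough sketch (node,
flavor and trait names made up):

    # before: capability-based scheduling via ComputeCapabilitiesFilter
    #   openstack flavor set bm-uefi --property capabilities:boot_mode=uefi
    # after: tag the node with a trait and require it from the flavor
    openstack baremetal node add trait node-0 CUSTOM_BOOT_MODE_UEFI
    openstack flavor set bm-uefi --property trait:CUSTOM_BOOT_MODE_UEFI=required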

Thoughts?

On Mon, 1 Oct 2018 at 22:42, Eric Fried  wrote:
> I do want to zoom out a bit and point out that we're talking about
> implementing a new framework of substantial size and impact when the
> original proposal - using the trait for both - would just work out of
> the box today with no changes in either API. Is it really worth it?

Yeah, I think the simpler solution deals with a lot of the cases right now.

Personally, I see using traits as about hiding complexity from the end
user (not the operator). End users are requesting a host with a given
capability (via flavor, image or otherwise), and they don't really
care if the operator has statically configured it, or Ironic
dynamically configures it. The operator still statically configures what
deploy templates are possible on what nodes (last time I read the
spec).

For the common cases, I see us adding standard traits. They would also
be useful to pick between nodes that are statically configured one way
or the other. (Although MarkG keeps telling me (in a British way) that
is probably rubbish, and he might be right...)

I am +1 Jay's idea for the more complicated cases (a bit like what
jroll was saying). For me, the user (gets bad interop and) has no
visibility into what the crazy custom trait means (i.e. the LAYOUT_Y
in efried's example). A validated blob in Glare doesn't seem terrible
for that special case. But generally that seems like quite a different
use case, and it's tempting to focus on something well-typed that is
specific to disk configuration. Although, it is tempting not to block the
simpler solution while we work out how people use this for real.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-10-02 Thread Chris Dent

On Wed, 19 Sep 2018, Monty Taylor wrote:

Yes. Your life will be much better if you do not make more legacy jobs. They 
are brittle and hard to work with.


New jobs should either use the devstack base job, the devstack-tempest base 
job or the devstack-tox-functional base job - depending on what things are 
intended.


I have a thing mostly working at https://review.openstack.org/#/c/601614/

The commit message has some ideas on how it could be better and the
various hacks I needed to do to get things working.

One of the comments in there is about the idea of making a zuul job
which is effectively "run the gabbits in these dirs" against a
tempest set up. Doing so will require some minor changes to the
tempest tox passenv settings, but I think it ought to be
straightforward-ish.
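
Conceptually the job boils down to something like this (a sketch -- the
environment variable name comes from gabbi-tempest, if I'm remembering it
right, and the paths are illustrative):

  # point gabbi-tempest at the directories holding the gabbits
  export GABBI_TEMPEST_PATH=/opt/stack/placement/gate/gabbits
  cd /opt/stack/tempest
  # 'gabbi' here is just a test regex selecting the gabbi-tempest tests
  tox -e all -- gabbi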

Some reviews from people who understand these things more than me
would be most welcome.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread Dmitry Tantsur

On 10/2/18 12:36 AM, Jay Pipes wrote:

On 10/01/2018 06:04 PM, Julia Kreger wrote:

On Mon, Oct 1, 2018 at 2:41 PM Eric Fried  wrote:


 > So say the user requests a node that supports UEFI because their
    image
 > needs UEFI. Which workflow would you want here?
 >
 > 1) The operator (or ironic?) has already configured the node to
    boot in
 > UEFI mode. Only pre-configured nodes advertise the "supports
    UEFI" trait.
 >
 > 2) Any node that supports UEFI mode advertises the trait. Ironic
    ensures
 > that UEFI mode is enabled before provisioning the machine.
 >
 > I imagine doing #2 by passing the traits which were specifically
 > requested by the user, from Nova to Ironic, so that Ironic can do the
 > right thing for the user.
 >
 > Your proposal suggests that the user request the "supports UEFI"
    trait,
 > and *also* pass some glance UUID which the user understands will make
 > sure the node actually boots in UEFI mode. Something like:
 >
 > openstack server create --flavor METAL_12CPU_128G --trait
    SUPPORTS_UEFI
 > --config-data $TURN_ON_UEFI_UUID
 >
 > Note that I pass --trait because I hope that will one day be
    supported
 > and we can slow down the flavor explosion.

    IMO --trait would be making things worse (but see below). I think UEFI
    with Jay's model would be more like:

   openstack server create --flavor METAL_12CPU_128G --config-data $UEFI

    where the UEFI profile would be pretty trivial, consisting of
    placement.traits.required = ["BOOT_MODE_UEFI"] and object.boot_mode =
    "uefi".

    I agree that this seems kind of heavy, and that it would be nice to be
    able to say "boot mode is UEFI" just once. OTOH I get Jay's point that
    we need to separate the placement decision from the instance
    configuration.

    That said, what if it was:

  openstack config-profile create --name BOOT_MODE_UEFI --json -
  {
   "type": "boot_mode_scheme",
   "version": 123,
   "object": {
       "boot_mode": "uefi"
   },
   "placement": {
    "traits": {
     "required": [
      "BOOT_MODE_UEFI"
     ]
    }
   }
  }
  ^D

    And now you could in fact say

  openstack server create --flavor foo --config-profile BOOT_MODE_UEFI

    using the profile name, which happens to be the same as the trait name
    because you made it so. Does that satisfy the yen for saying it once? (I
    mean, despite the fact that you first had to say it three times to get
    it set up.)

    

    I do want to zoom out a bit and point out that we're talking about
    implementing a new framework of substantial size and impact when the
    original proposal - using the trait for both - would just work out of
    the box today with no changes in either API. Is it really worth it?


+1000. Reading both of these threads, it feels like we're basically trying to 
make something perfect. I think that is a fine goal, except it is unrealistic 
because the enemy of good is perfection.


    

    By the way, with Jim's --trait suggestion, this:

 > ...dozens of flavors that look like this:
 > - 12CPU_128G_RAID10_DRIVE_LAYOUT_X
 > - 12CPU_128G_RAID5_DRIVE_LAYOUT_X
 > - 12CPU_128G_RAID01_DRIVE_LAYOUT_X
 > - 12CPU_128G_RAID10_DRIVE_LAYOUT_Y
 > - 12CPU_128G_RAID5_DRIVE_LAYOUT_Y
 > - 12CPU_128G_RAID01_DRIVE_LAYOUT_Y

    ...could actually become:

  openstack server create --flavor 12CPU_128G --trait $WHICH_RAID
    --trait
    $WHICH_LAYOUT

    No flavor explosion.


++ I believe this was where this discussion kind of ended up in Dublin?

The desire and discussion that led us to complex configuration templates and 
profiles being submitted were driven by highly complex scenarios where users 
wanted to assert detailed RAID configurations on disk. Naturally, there are many 
issues there. The ability to provide such detail would be awesome for the 
10% of operators that need such functionality. Of course, if that is the only 
path forward, then we delay the 90% from getting the minimum viable feature 
they need.



    (Maybe if we called it something other than --trait, like maybe
    --config-option, it would let us pretend we're not really overloading a
    trait to do config - it's just a coincidence that the config option has
    the same name as the trait it causes to be required.)


I feel like it might be confusing, but totally +1 to matching required trait 
name being a thing. That way scheduling is completely decoupled and if 
everything was correct then the request should already be scheduled properly.


I guess I'll just drop the idea of doing this properly then. It's true that the 
placement traits concept can be hacked up and the virt driver can just pass a 
list of trait strings to the Ironic API and that's the most expedient way to get 
what the 90% of people apparently want. It's also 

Re: [openstack-dev] [neutron][stadium][networking] Seeking proposals for non-voting Stadium projects in Neutron check queue

2018-10-02 Thread thomas.morin

Hi Miguel, all,

The initiative is very welcome and will help make it more efficient to 
develop in stadium projects.


legacy-tempest-dsvm-networking-bgpvpn-bagpipe would be a candidate for both 
networking-bgpvpn and networking-bagpipe: it covers API and scenario 
tests for the BGPVPN API (networking-bgpvpn), and since 
networking-bagpipe is used as the reference driver, it exercises a large 
portion of networking-bagpipe as well.


Having this one will help a lot.

Thanks,

-Thomas


On 9/30/18 2:42 AM, Miguel Lavalle wrote:

Dear networking Stackers,

During the recent PTG in Denver, we discussed measures to prevent 
patches merged in the Neutron repo breaking Stadium and related 
networking projects in general. We decided to implement the following:


1) For Stadium projects, we want to add non-voting jobs to the Neutron 
check queue

2) For non-Stadium projects, we are inviting them to add 3rd party CI jobs

The next step is for each project to propose the jobs that they want 
to run against Neutron patches.


Best regards

Miguel

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed

2018-10-02 Thread Miguel Angel Ajo Pelayo
Thanks for the info Doug.

On Mon, Oct 1, 2018 at 6:25 PM Doug Hellmann  wrote:

> Miguel Angel Ajo Pelayo  writes:
>
> > Thank you for the guidance and ping Doug.
> >
> > Was this triggered by [1]? Or by the 1.1.0 tag pushed to gerrit?
>
> The release jobs are always triggered by the git tagging event. The
> patches in openstack/releases run a job that adds tags, but the patch
> you linked to hasn't been merged yet, so it looks like it was caused by
> pushing the tag manually.
>
> Doug
>


-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev