Re: [openstack-dev] [all] 2019 summit during May holidays?

2018-11-05 Thread Dmitry Tantsur
On Mon, Nov 5, 2018, 20:07 Julia Kreger wrote:
>
> *removes all of the hats*
>
> *removes years of dust from unrelated event planning hat, and puts it on
> for a moment*
>
> In my experience, events of any nature where convention venue space is
> involved are essentially set in stone before being publicly advertised, as
> contracts are put in place for hotel room booking blocks as well as the
> convention venue space. These spaces are also typically in relatively
> high demand, limiting the access and available times to schedule. Often
> venues also give preference (and sometimes even better group discounts) to
> repeat events, as they are typically a known entity and will have somewhat
> known needs, so the venue and hotel(s) can staff appropriately.
>
> tl;dr, I personally wouldn't expect any changes to be possible at this
> point.
>
> *removes event planning hat of past life, puts personal scheduling hat on*
>
> I imagine that as a community, it is near impossible to schedule something
> avoiding holidays for everyone in the community.
>

I'm not talking about everyone. And I'm mostly fine with my own holiday, but the
conflicts with Russia and Japan seem huge. This certainly does not help our
effort to engage people outside of NA/EU.

Quick googling suggests that the week of May 13th would have far fewer
conflicts.


> I personally have lost count of the number of holidays and special days
> that I've spent on business trips over the past four years. While I may be
> an outlier in my feelings on this subject, I'm not upset, annoyed, or even
> bitter about lost time. This community is part of my family.
>

Sure :)

But outside of our nice small circle there is a huge world of people who
may not share our feelings or our level of commitment to OpenStack. These are
the occasional contributors we talked about when discussing the cycle length. I
don't think asking them to give up 3-5 days of holidays is a productive way
to engage them.

And again, as much as I love meeting you all, I think we're outgrowing the
format of these meetings.

Dmitry


> -Julia
>
> On Mon, Nov 5, 2018 at 8:19 AM Dmitry Tantsur  wrote:
>
>> Hi all,
>>
>> Not sure how official the information about the next summit is, but it's
>> on the
>> web site [1], so I guess it's worth asking.
>>
>> Are we planning for the summit to overlap with the May holidays? The 1st
>> of May
>> is a holiday in a big part of the world. We are asking people to skip it in
>> addition to
>> the 3+ weekend days they'll have to spend working and traveling.
>>
>> To make it worse, 1-3 May are holidays in Russia this time. To make it
>> even
>> worse than worse, the week of the 29th is Golden Week in Japan [2]. Was
>> this
>> considered? Is it possible to move the dates to a less conflicting time
>> (mid-May
>> maybe)?
>>
>> Dmitry
>>
>> [1] https://www.openstack.org/summit/denver-2019/
>> [2] https://en.wikipedia.org/wiki/Golden_Week_(Japan)
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>


[openstack-dev] [all] 2019 summit during May holidays?

2018-11-05 Thread Dmitry Tantsur

Hi all,

Not sure how official the information about the next summit is, but it's on the 
web site [1], so I guess it's worth asking.


Are we planning for the summit to overlap with the May holidays? The 1st of May 
is a holiday in a big part of the world. We are asking people to skip it in 
addition to the 3+ weekend days they'll have to spend working and traveling.


To make it worse, 1-3 May are holidays in Russia this time. To make it even 
worse than worse, the week of the 29th is Golden Week in Japan [2]. Was this 
considered? Is it possible to move the dates to a less conflicting time (mid-May 
maybe)?


Dmitry

[1] https://www.openstack.org/summit/denver-2019/
[2] https://en.wikipedia.org/wiki/Golden_Week_(Japan)



[openstack-dev] [ironic] Team gathering at the Forum

2018-11-03 Thread Dmitry Tantsur

Hi Ironicers!

Good news: I have made the reservation for the gathering! :) It will happen on 
Wednesday, November 14, 2018 at 7 p.m. at the restaurant Lindenbräu am Potsdamer 
Platz (https://goo.gl/maps/DYb5ikGGmdw). I will depart from the venue at around 
6:15. Follow the hangouts chat (to be set up) for any last-minute changes.


If you come along, you will need to get to Potsdamer Platz; there are S-Bahn, 
U-Bahn, train and bus stations there. Google suggests we take S3/S9 (direction 
Erkner/Airport) from Messe Süd first, then switch to S1/S2/S25/S26 at 
Friedrichstraße. Those going from the Crowne Plaza Hotel can take bus 200 
directly to Potsdamer Platz. You'll need an A-B zone ticket for the trip. The 
restaurant is located in a courtyard behind the tall DB building.


If you want to come but did not sign up in the doodle, please let me know 
off-list.

See you in Berlin,
Dmitry

On 10/29/18 3:58 PM, Dmitry Tantsur wrote:

Hi folks!

This is your friendly reminder to vote on the day. Even if you're fine with all 
days, please leave a vote, so that we know how many people are coming. We will 
need to make a reservation, and we may not be able to accommodate more people 
than voted!


Dmitry

On 10/22/18 6:06 PM, Dmitry Tantsur wrote:

Hi ironicers! :)

We are trying to plan an informal Ironic team gathering in Berlin. If you care 
about Ironic and would like to participate, please fill in 
https://doodle.com/poll/iw5992px765nthde. Note that the location is tentative, 
also depending on how many people sign up.


Dmitry







Re: [openstack-dev] [ironic] Team gathering at the Forum?

2018-11-01 Thread Dmitry Tantsur

Hi,

You mean Lindenbräu, right? I was thinking of changing the venue, but if you're 
there, I think we should go there too! I just hope they can still fit 15 ppl :) 
I will try to make the reservation tomorrow.


Dmitry

On 11/1/18 12:11 AM, Stig Telfer wrote:

Hello Ironicers -

We’ve booked the same venue for the Scientific SIG for Wednesday evening, and 
hopefully we’ll see you there.  There’s plenty of cross-over between our 
groups, particularly at an operator level.

Cheers,
Stig



On 29 Oct 2018, at 14:58, Dmitry Tantsur  wrote:

Hi folks!

This is your friendly reminder to vote on the day. Even if you're fine with all 
days, please leave a vote, so that we know how many people are coming. We will 
need to make a reservation, and we may not be able to accommodate more people 
than voted!

Dmitry

On 10/22/18 6:06 PM, Dmitry Tantsur wrote:

Hi ironicers! :)
We are trying to plan an informal Ironic team gathering in Berlin. If you care 
about Ironic and would like to participate, please fill in 
https://doodle.com/poll/iw5992px765nthde. Note that the location is tentative, 
also depending on how many people sign up.
Dmitry












Re: [openstack-dev] Ironic integration CI jobs

2018-10-31 Thread Dmitry Tantsur

On 10/31/18 2:57 PM, Julia Kreger wrote:



On Wed, Oct 31, 2018 at 5:44 AM Dmitry Tantsur wrote:


Hi,

On 10/31/18 1:36 AM, Julia Kreger wrote:
[trim]
 >
 > ironic-tempest-dsvm-ipa-wholedisk-agent_ipmitool-tinyipa-multinode - This
 > job is essentially the same as our grenade multinode job, the only
 > difference being grenade.

Nope, not the same. Grenade jobs run only smoke tests, this job runs

https://github.com/openstack/ironic-tempest-plugin/blob/master/ironic_tempest_plugin/tests/scenario/test_baremetal_multitenancy.py


Ugh, looking closer, we still end up deploying when the smoke tests run. It 
feels like the only real difference in what is being exercised is that our 
explicit test scenario of putting two instances on two separate networks and 
validating connectivity between the two is not present. I guess I'm failing to 
see why we need all of the setup and infrastructure when we're just testing 
pluggable network bits and their settings. Maybe it is a good candidate for 
looking at evolving how we handle scenario testing so we reduce our gate load 
and the resulting wait for test results.


 > ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa - This job
 > essentially just duplicates the functionality already covered in other
 > jobs, including the grenade job.

Ditto, grenade jobs do not cover our tests at all. Also this is the very job we
run on other projects (nova, neutron, maybe more), so it will be a bit painful
to remove it.


We run the basic baremetal ops test, which tests deploy. If we're already 
covering the same code paths in other tests (which I feel we are), then the test 
feels redundant to me. I'm not worried about the effort to change the job in 
other gates. We really need to pull agent_ipmitool out of the name if we keep it 
anyway... which still means going through zuul configs.


Smoke tests do not cover rescue with bare metal, do they? Because our jobs do.



[trim]








Re: [openstack-dev] Ironic integration CI jobs

2018-10-31 Thread Dmitry Tantsur

Hi,

On 10/31/18 1:36 AM, Julia Kreger wrote:
With the discussion of CI jobs and the fact that I have been finding myself 
checking job status several times a day so early in the cycle, I think it is 
time for ironic to revisit many of our CI jobs.


The bottom line is that ironic is very resource intensive to test. A lot of that 
is because of the underlying way we enroll/manage nodes and then execute the 
integration scenarios emulating bare metal. I think we can improve that with 
some Ansible.


In the meantime I created a quick chart [1] to try to make sense of our overall 
integration coverage, and I think it makes sense to remove three of the jobs.


ironic-tempest-dsvm-ipa-wholedisk-agent_ipmitool-tinyipa-multinode - This job is 
essentially the same as our grenade multinode job, the only difference being 
grenade.


Nope, not the same. Grenade jobs run only smoke tests, this job runs 
https://github.com/openstack/ironic-tempest-plugin/blob/master/ironic_tempest_plugin/tests/scenario/test_baremetal_multitenancy.py


ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa - This job 
essentially just duplicates the functionality already covered in other jobs, 
including the grenade job.


Ditto, grenade jobs do not cover our tests at all. Also this is the very job we 
run on other projects (nova, neutron, maybe more), so it will be a bit painful 
to remove it.


ironic-tempest-dsvm-bfv - This presently non-voting job validates that the iPXE 
mode of the 'pxe' boot interface supports boot from volume. It was superseded by 
ironic-tempest-dsvm-ipxe-bfv which focuses on the use of the 'ipxe' boot 
interface. The underlying code is all the same deep down in all of the helper 
methods.


+1 to this.

Dmitry



I'll go ahead and put this up as a topic for our weekly meeting next week so we 
can discuss.


Thanks,

-Julia

[1]: https://ethercalc.openstack.org/ces0z3xjb1ir
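[Editor's note: for illustration only, removals like the ones proposed above would land as a change to the project's Zuul configuration, roughly along these lines. This is a hedged sketch; the real file layout, pipelines, and job lists in ironic's repo differ.]

```yaml
# Hypothetical excerpt of a .zuul.yaml project stanza. Job names mirror the
# discussion; dropping a job means deleting its entry from each pipeline.
- project:
    check:
      jobs:
        - ironic-tempest-dsvm-ipxe-bfv
        # removed: ironic-tempest-dsvm-bfv (superseded by the ipxe variant)
    gate:
      jobs:
        - ironic-tempest-dsvm-ipxe-bfv
```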







Re: [openstack-dev] [ironic] Team gathering at the Forum?

2018-10-29 Thread Dmitry Tantsur

Hi folks!

This is your friendly reminder to vote on the day. Even if you're fine with all 
days, please leave a vote, so that we know how many people are coming. We will 
need to make a reservation, and we may not be able to accommodate more people 
than voted!


Dmitry

On 10/22/18 6:06 PM, Dmitry Tantsur wrote:

Hi ironicers! :)

We are trying to plan an informal Ironic team gathering in Berlin. If you care 
about Ironic and would like to participate, please fill in 
https://doodle.com/poll/iw5992px765nthde. Note that the location is tentative, 
also depending on how many people sign up.


Dmitry





[openstack-dev] [ironic] Team gathering at the Forum?

2018-10-22 Thread Dmitry Tantsur

Hi ironicers! :)

We are trying to plan an informal Ironic team gathering in Berlin. If you care 
about Ironic and would like to participate, please fill in 
https://doodle.com/poll/iw5992px765nthde. Note that the location is tentative, 
also depending on how many people sign up.


Dmitry



[openstack-dev] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge"

2018-10-18 Thread Dmitry Tantsur

Hi all,

Sorry for chiming in really late on this topic, but I think $subj is worth 
discussing before we settle harder on the potentially confusing terminology.


I think the difference between "Edge" and "Far Edge" is too vague to use these 
terms in practice. Think about the "edge" metaphor itself: something rarely has 
several layers of edges. A knife has an edge; there are no far edges. I can 
imagine zooming in and seeing more edges at the edge, and that is quite cool 
indeed, but is it really a useful metaphor for those who have never used a 
strong microscope? :)


I think in the trivial sense "Far Edge" is a tautology and should be avoided. 
As weak proof of my words, I already see a lot of smart people confusing these 
two and actually using Central/Edge where they mean Edge/Far Edge. I suggest we 
adopt different terminology, even if it is less consistent with the typical 
marketing terms around the "Edge" movement.


Now, I don't have really great suggestions. Something that came up in TripleO 
discussions [1] is Core/Hub/Edge, which I think reflects the idea better.


I'd be very interested to hear your ideas.

Dmitry

[1] https://etherpad.openstack.org/p/tripleo-edge-mvp



Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-18 Thread Dmitry Tantsur

On 10/17/18 5:59 PM, Joshua Harlow wrote:

Dmitry Tantsur wrote:

On 10/10/18 7:41 PM, Greg Hill wrote:

I've been out of the openstack loop for a few years, so I hope this
reaches the right folks.

Josh Harlow (original author of taskflow and related libraries) and I
have been discussing the option of moving taskflow out of the
openstack umbrella recently. This move would likely also include the
futurist and automaton libraries that are primarily used by taskflow.


Just for completeness: futurist and automaton are also heavily relied on
by ironic without using taskflow.


When did futurist get used??? nice :)

(I knew automaton was, but maybe I knew futurist was too and I forgot, lol).


I'm pretty sure you did, it happened back in Mitaka :)






The idea would be to just host them on github and use the regular
Github features for Issues, PRs, wiki, etc, in the hopes that this
would spur more development. Taskflow hasn't had any substantial
contributions in several years and it doesn't really seem that the
current openstack devs have a vested interest in moving it forward. I
would like to move it forward, but I don't have an interest in being
bound by the openstack workflow (this is why the project stagnated as
core reviewers were pulled on to other projects and couldn't keep up
with the review backlog, so contributions ground to a halt).

I guess I'm putting it forward to the larger community. Does anyone
have any objections to us doing this? Are there any non-obvious
technicalities that might make such a transition difficult? Who would
need to be made aware so they could adjust their own workflows?

Or would it be preferable to just fork and rename the project so
openstack can continue to use the current taskflow version without
worry of us breaking features?

Greg














Re: [openstack-dev] [oslo][taskflow] Thoughts on moving taskflow out of openstack/oslo

2018-10-15 Thread Dmitry Tantsur

On 10/10/18 7:41 PM, Greg Hill wrote:
I've been out of the openstack loop for a few years, so I hope this reaches the 
right folks.


Josh Harlow (original author of taskflow and related libraries) and I have been 
discussing the option of moving taskflow out of the openstack umbrella recently. 
This move would likely also include the futurist and automaton libraries that 
are primarily used by taskflow.


Just for completeness: futurist and automaton are also heavily relied on by 
ironic without using taskflow.


The idea would be to just host them on github 
and use the regular Github features for Issues, PRs, wiki, etc, in the hopes 
that this would spur more development. Taskflow hasn't had any substantial 
contributions in several years and it doesn't really seem that the current 
openstack devs have a vested interest in moving it forward. I would like to move 
it forward, but I don't have an interest in being bound by the openstack 
workflow (this is why the project stagnated as core reviewers were pulled on to 
other projects and couldn't keep up with the review backlog, so contributions 
ground to a halt).


I guess I'm putting it forward to the larger community. Does anyone have any 
objections to us doing this? Are there any non-obvious technicalities that might 
make such a transition difficult? Who would need to be made aware so they could 
adjust their own workflows?


Or would it be preferable to just fork and rename the project so openstack can 
continue to use the current taskflow version without worry of us breaking features?


Greg








Re: [openstack-dev] [ironic] Stepping down as core

2018-10-12 Thread Dmitry Tantsur
I'm sad to hear it :( Good luck, do not disappear completely, it was a pleasure 
to work with you. See you in Berlin!


On 10/11/18 1:40 PM, Sam Betts (sambetts) wrote:
As many of you will have seen on IRC, I've mostly been appearing AFK for the 
last couple of development cycles. Due to other tasks downstream most of my 
attention has been drawn away from upstream Ironic development. Going forward 
I'm unlikely to be as heavily involved with the Ironic project as I have been in 
the past, so I am stepping down as a core contributor to make way for those more 
involved and with more time to contribute.


That said, I do not intend to disappear. My colleagues and I plan to 
continue to support the Cisco Ironic drivers, we just won't be so heavily 
involved in core ironic development. It's worth noting that although I might 
appear AFK on IRC because my main focus is on other things, I always have an ear 
to the ground, and direct pings will generally reach me.


I will be in Berlin for the OpenStack summit, so to those that are attending I 
hope to see you there.


The Ironic project has been (and I hope continues to be) an awesome place to 
contribute to. Thank you!


Sam Betts
sambetts








Re: [openstack-dev] [tc] [all] TC Report 18-40

2018-10-02 Thread Dmitry Tantsur

++ that was very helpful, thanks Chris!

On 10/2/18 6:18 PM, Steven Dake (stdake) wrote:

Chris,

Thanks for all the hard work you have put into this. FWIW I found value in 
your reports, perhaps because I am not involved in the daily activities of 
the TC.

Cheers
-steve


On 10/2/18, 8:25 AM, "Chris Dent"  wrote:

 
 HTML: https://anticdent.org/tc-report-18-40.html

 I'm going to take a break from writing the TC reports for a while.

 If other people (whether on the TC or not) are interested in
 producing their own form of a subjective review of the week's TC
 activity, I very much encourage you to do so. It's proven an
 effective way to help at least some people maintain engagement.

 I may pick it up again when I feel like I have sufficient focus and
 energy to produce something that has more value and interpretation
 than simply pointing at
 [the IRC logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/).
 However, at this time, I'm not producing a product that is worth the
 time it takes me to do it and the time it takes away from doing
 other things. I'd rather make more significant progress on fewer
 things.

 In the meantime, please join me in congratulating and welcoming the
 newly elected members of the TC: Lance Bragstad, Jean-Philippe
 Evrard, Doug Hellman, Julia Kreger, Ghanshyam Mann, and Jeremy
 Stanley.

 --

 Chris Dent   ٩◔̯◔۶   https://anticdent.org/
 freenode: cdent tw: @anticdent







Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread Dmitry Tantsur

On 10/2/18 6:17 PM, Mark Goddard wrote:



On Tue, 2 Oct 2018 at 17:10, Jim Rollenhagen wrote:


On Tue, Oct 2, 2018 at 11:40 AM Eric Fried  wrote:

 > What Eric is proposing (and Julia and I seem to be in favor of), is
 > nearly the same as your proposal. The single difference is that these
 > config templates or deploy templates or whatever could *also* require
 > certain traits, and the scheduler would use that information to pick a
 > node. While this does put some scheduling information into the config
 > template, it also means that we can remove some of the flavor explosion
 > *and* mostly separate scheduling from configuration.
 >
 > So, you'd have a list of traits on a flavor:
 >
 > required=HW_CPU_X86_VMX,HW_NIC_ACCEL_IPSEC
 >
 > And you would also have a list of traits in the deploy template:
 >
 > {"traits": {"required": ["STORAGE_HARDWARE_RAID"]}, "config": }
 >
 > This allows for making flavors that are reasonably flexible (instead of
 > two flavors that do VMX and IPSEC acceleration, one of which does RAID).
 > It also allows users to specify a desired configuration without also
 > needing to know how to correctly choose a flavor that can handle that
 > configuration.
 >
 > I think it makes a lot of sense, doesn't impose more work on users, and
 > can reduce the number of flavors operators need to manage.
 >
 > Does that make sense?

This is in fact exactly what Jay proposed. And both Julia and I are in
favor of it as an ideal long-term solution. Where Julia and I deviated
from Jay's point of view was in our desire to use "the hack" in the
short term so we can satisfy the majority of use cases right away
without having to wait for that ideal solution to materialize.


Ah, good point, I had missed that initially. Thanks. Let's do that.

So if we all agree Jay's proposal is the right thing to do, is there any
reason to start working on a short-term hack instead of putting those
efforts into the better solution? I don't see why we couldn't get that done
in one cycle, if we're all in agreement on it.


I'm still unclear on the ironic side of this. I can see that config of some sort 
is stored in glance, and referenced upon nova server creation. Somehow this 
would be synced to ironic by the nova virt driver during node provisioning. The 
part that's missing in my mind is how to map from a config in glance to a set of 
actions performed by ironic. Does the config in glance reference a deploy 
template, or a set of ironic deploy steps? Or does ironic (or OpenStack) define 
some config schema that it supports, and use it to generate a set of deploy steps?


I think the most straightforward way is through the same deploy steps mechanism 
we planned. Make the virt driver fetch the config from glance, then pass it to 
the provisioning API. As a bonus, we'll get the same API workflow in the 
standalone and Nova cases.





// jim








Re: [openstack-dev] [Openstack-operators] [ironic] [nova] [tripleo] Deprecation of Nova's integration with Ironic Capabilities and ComputeCapabilitiesFilter

2018-10-02 Thread Dmitry Tantsur

On 10/2/18 12:36 AM, Jay Pipes wrote:

On 10/01/2018 06:04 PM, Julia Kreger wrote:

On Mon, Oct 1, 2018 at 2:41 PM Eric Fried  wrote:


 >  > So say the user requests a node that supports UEFI because their image
 >  > needs UEFI. Which workflow would you want here?
 >  >
 >  > 1) The operator (or ironic?) has already configured the node to boot in
 >  > UEFI mode. Only pre-configured nodes advertise the "supports UEFI" trait.
 >  >
 >  > 2) Any node that supports UEFI mode advertises the trait. Ironic ensures
 >  > that UEFI mode is enabled before provisioning the machine.
 >  >
 >  > I imagine doing #2 by passing the traits which were specifically
 >  > requested by the user, from Nova to Ironic, so that Ironic can do the
 >  > right thing for the user.
 >  >
 >  > Your proposal suggests that the user request the "supports UEFI" trait,
 >  > and *also* pass some glance UUID which the user understands will make
 >  > sure the node actually boots in UEFI mode. Something like:
 >  >
 >  > openstack server create --flavor METAL_12CPU_128G --trait SUPPORTS_UEFI
 >  > --config-data $TURN_ON_UEFI_UUID
 >  >
 >  > Note that I pass --trait because I hope that will one day be supported
 >  > and we can slow down the flavor explosion.

    IMO --trait would be making things worse (but see below). I think UEFI
    with Jay's model would be more like:

      openstack server create --flavor METAL_12CPU_128G --config-data $UEFI

    where the UEFI profile would be pretty trivial, consisting of
    placement.traits.required = ["BOOT_MODE_UEFI"] and object.boot_mode =
    "uefi".

    I agree that this seems kind of heavy, and that it would be nice to be
    able to say "boot mode is UEFI" just once. OTOH I get Jay's point that
    we need to separate the placement decision from the instance
    configuration.

    That said, what if it was:

      openstack config-profile create --name BOOT_MODE_UEFI --json -
      {
       "type": "boot_mode_scheme",
       "version": 123,
       "object": {
           "boot_mode": "uefi"
       },
       "placement": {
        "traits": {
         "required": [
          "BOOT_MODE_UEFI"
         ]
        }
       }
      }
      ^D

    And now you could in fact say

      openstack server create --flavor foo --config-profile BOOT_MODE_UEFI

    using the profile name, which happens to be the same as the trait name
    because you made it so. Does that satisfy the yen for saying it once? (I
    mean, despite the fact that you first had to say it three times to get
    it set up.)

    

    I do want to zoom out a bit and point out that we're talking about
    implementing a new framework of substantial size and impact when the
    original proposal - using the trait for both - would just work out of
    the box today with no changes in either API. Is it really worth it?


+1000. Reading both of these threads, it feels like we're basically trying to 
make something perfect. I think that is a fine goal, except it is unrealistic 
because the enemy of good is perfection.


    

    By the way, with Jim's --trait suggestion, this:

 > ...dozens of flavors that look like this:
 > - 12CPU_128G_RAID10_DRIVE_LAYOUT_X
 > - 12CPU_128G_RAID5_DRIVE_LAYOUT_X
 > - 12CPU_128G_RAID01_DRIVE_LAYOUT_X
 > - 12CPU_128G_RAID10_DRIVE_LAYOUT_Y
 > - 12CPU_128G_RAID5_DRIVE_LAYOUT_Y
 > - 12CPU_128G_RAID01_DRIVE_LAYOUT_Y

    ...could actually become:

  openstack server create --flavor 12CPU_128G --trait $WHICH_RAID
    --trait
    $WHICH_LAYOUT

    No flavor explosion.
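
    To put a number on it (a quick sketch, using the names from the quote): the
    flavor-per-combination approach needs the full cross product, while the
    trait approach only needs the individual names plus one flavor.

```python
from itertools import product

raid_levels = ["RAID10", "RAID5", "RAID01"]
layouts = ["DRIVE_LAYOUT_X", "DRIVE_LAYOUT_Y"]

# Flavor-per-combination: one flavor for every (raid, layout) pair.
flavors = ["12CPU_128G_%s_%s" % (r, l) for r, l in product(raid_levels, layouts)]
print(len(flavors))  # 6 flavors for a single hardware shape

# Trait approach: a single flavor; RAID level and layout are picked per
# request with --trait, so a new layout adds one trait, not a whole new
# row of flavors.
print(len(raid_levels) + len(layouts))  # 5 traits, 1 flavor
```

    With more hardware shapes the first number multiplies, the second one adds.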


++ I believe this is roughly where this discussion ended up... in Dublin?

The desire and discussion that led us to complex configuration templates and 
profiles came from highly complex scenarios where users wanted to assert 
detailed RAID configurations for their disks. Naturally, there are many issues 
there. The ability to provide such detail would be awesome for the 10% of 
operators that need such functionality. Of course, if that is the only path 
forward, then we delay the 90% from getting the minimum viable feature they need.



    (Maybe if we called it something other than --trait, like maybe
    --config-option, it would let us pretend we're not really overloading a
    trait to do config - it's just a coincidence that the config option has
    the same name as the trait it causes to be required.)


I feel like it might be confusing, but totally +1 to having the profile name 
match the required trait name. That way scheduling is completely decoupled, and 
if everything is correct, the request is already scheduled properly.


I guess I'll just drop the idea of doing this properly then. It's true that the 
placement traits concept can be hacked up and the virt driver can just pass a 
list of trait strings to the Ironic API and that's the most expedient way to get 
what the 90% of people apparently want. It's also 

Re: [openstack-dev] [Openstack-sigs] Are we ready to put stable/ocata into extended maintenance mode?

2018-09-19 Thread Dmitry Tantsur

On 9/18/18 9:27 PM, Matt Riedemann wrote:
The release page says Ocata is planned to go into extended maintenance mode on 
Aug 27 [1]. There really isn't much to this except it means we don't do releases 
for Ocata anymore [2]. There is a caveat that project teams that do not wish to 
maintain stable/ocata after this point can immediately end of life the branch 
for their project [3]. We can still run CI using tags, e.g. if keystone goes 
ocata-eol, devstack on stable/ocata can still continue to install from 
stable/ocata for nova and the ocata-eol tag for keystone. Having said that, if 
there is no undue burden on the project team keeping the lights on for 
stable/ocata, I would recommend not tagging the stable/ocata branch end of life 
at this point.


So, questions that need answering are:

1. Should we cut a final release for projects with stable/ocata branches before 
going into extended maintenance mode? I tend to think "yes" to flush the queue 
of backports. In fact, [3] doesn't mention it, but the resolution said we'd tag 
the branch [4] to indicate it has entered the EM phase.


Some ironic projects have outstanding changes, I guess we should release them.



2. Are there any projects that would want to skip EM and go directly to EOL (yes 
this feels like a Monopoly question)?


[1] https://releases.openstack.org/
[2] 
https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases 

[3] 
https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance 

[4] 
https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life 






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Bumping eventlet to 0.24.1

2018-09-18 Thread Dmitry Tantsur

On 9/6/18 8:52 AM, Matthew Thode wrote:

On 18-08-23 09:50:13, Matthew Thode wrote:

This is your warning, if you have concerns please comment in
https://review.openstack.org/589382 .  cross tests pass, so that's a
good sign... atm this is only for stein.



I pushed the big red button.


Ironic grenade might have been broken by this change: 
https://bugs.launchpad.net/neutron/+bug/1792925. No clear evidence, but that 
seems to be the only suspect.











Re: [openstack-dev] [all]-ish : Updates required for readthedocs publishers

2018-09-05 Thread Dmitry Tantsur

On 09/05/2018 06:10 PM, Doug Hellmann wrote:

Excerpts from Ian Wienand's message of 2018-09-05 18:53:10 +1000:

Hello,

If you're interested in the projects mentioned below, you may have
noticed a new, failing, non-voting job
"your-readthedocs-job-requires-attention".  Spoiler alert: your
readthedocs job requires attention.  It's easy to miss because
publishing happens in the post pipeline and people don't often look
at the results of these jobs.

Please see the prior email on this

  http://lists.openstack.org/pipermail/openstack-dev/2018-August/132836.html


Those instructions and the ones linked at
https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-docs-on-readthedocs
say to "generate a web hook URL". RTD offers me 4 types of webhooks
(github, bitbucket, gitlab, generic). Which type do we need for our
CI? "generic"?


The generic one.



What is the "ID" of the webhook? The number at the end of the URL, or
the token associated with it?


The number, like here: 
https://github.com/openstack/metalsmith/blob/master/.zuul.yaml#L157




Doug



for what to do (if you read the failing job logs, it also points you
to this).

I (or #openstack-infra) can help, but only once the openstackci user
is given permissions to the RTD project by its current owner.

Thanks,

-i

The following projects have this job now:

openstack-infra/gear
openstack/airship-armada
openstack/almanach
openstack/ansible-role-bindep
openstack/ansible-role-cloud-launcher
openstack/ansible-role-diskimage-builder
openstack/ansible-role-cloud-fedmsg
openstack/ansible-role-cloud-gearman
openstack/ansible-role-jenkins-job-builder
openstack/ansible-role-logrotate
openstack/ansible-role-ngix
openstack/ansible-role-nodepool
openstack/ansible-role-openstacksdk
openstack/ansible-role-shade
openstack/ansible-role-ssh
openstack/ansible-role-sudoers
openstack/ansible-role-virtualenv
openstack/ansible-role-zookeeper
openstack/ansible-role-zuul
openstack/ara
openstack/bareon
openstack/bareon-allocator
openstack/bareon-api
openstack/bareon-ironic
openstack/browbeat
openstack/downpour
openstack/fuel-ccp
openstack/fuel-ccp-installer
openstack/fuel-noop-fixtures
openstack/ironic-staging-drivers
openstack/k8s-docker-suite-app-murano
openstack/kloudbuster
openstack/nerd-reviewer
openstack/networking-dpm
openstack/nova-dpm
openstack/ooi
openstack/os-faults
openstack/packetary
openstack/packetary-specs
openstack/performa
openstack/poppy
openstack/python-almanachclient
openstack/python-jenkins
openstack/rally
openstack/solar
openstack/sqlalchemy-migrate
openstack/stackalytics
openstack/surveil
openstack/swauth
openstack/turbo-hipster
openstack/virtualpdu
openstack/vmtp
openstack/windmill
openstack/yaql









Re: [openstack-dev] [all]-ish : Updates required for readthedocs publishers

2018-09-05 Thread Dmitry Tantsur

On 09/05/2018 10:53 AM, Ian Wienand wrote:

Hello,

If you're interested in the projects mentioned below, you may have
noticed a new, failing, non-voting job
"your-readthedocs-job-requires-attention".  Spoiler alert: your
readthedocs job requires attention.  It's easy to miss because
publishing happens in the post pipeline and people don't often look
at the results of these jobs.

Please see the prior email on this

  http://lists.openstack.org/pipermail/openstack-dev/2018-August/132836.html

for what to do (if you read the failing job logs, it also points you
to this).

I (or #openstack-infra) can help, but only once the openstackci user
is given permissions to the RTD project by its current owner.

Thanks,

-i

The following projects have this job now:

openstack-infra/gear
openstack/airship-armada
openstack/almanach
openstack/ansible-role-bindep
openstack/ansible-role-cloud-launcher
openstack/ansible-role-diskimage-builder
openstack/ansible-role-cloud-fedmsg
openstack/ansible-role-cloud-gearman
openstack/ansible-role-jenkins-job-builder
openstack/ansible-role-logrotate
openstack/ansible-role-ngix
openstack/ansible-role-nodepool
openstack/ansible-role-openstacksdk
openstack/ansible-role-shade
openstack/ansible-role-ssh
openstack/ansible-role-sudoers
openstack/ansible-role-virtualenv
openstack/ansible-role-zookeeper
openstack/ansible-role-zuul
openstack/ara
openstack/bareon
openstack/bareon-allocator
openstack/bareon-api
openstack/bareon-ironic
openstack/browbeat
openstack/downpour
openstack/fuel-ccp
openstack/fuel-ccp-installer
openstack/fuel-noop-fixtures
openstack/ironic-staging-drivers


I'm pretty sure we updated this in Rocky. Is it here because of stable branches? 
I can propose a backport if so.



openstack/k8s-docker-suite-app-murano
openstack/kloudbuster
openstack/nerd-reviewer
openstack/networking-dpm
openstack/nova-dpm
openstack/ooi
openstack/os-faults
openstack/packetary
openstack/packetary-specs
openstack/performa
openstack/poppy
openstack/python-almanachclient
openstack/python-jenkins
openstack/rally
openstack/solar
openstack/sqlalchemy-migrate
openstack/stackalytics
openstack/surveil
openstack/swauth
openstack/turbo-hipster
openstack/virtualpdu
openstack/vmtp
openstack/windmill
openstack/yaql







Re: [openstack-dev] [api] Open API 3.0 for OpenStack API

2018-09-04 Thread Dmitry Tantsur

Hi,

On 08/29/2018 08:36 AM, Edison Xiang wrote:

Hi team,

As we know, OpenAPI 3.0 was released in July 2017, about one year ago.
OpenAPI 3.0 supports some new features, like anyOf, oneOf and allOf, that 
OpenAPI 2.0 (Swagger 2.0) does not.

Currently, OpenStack projects do not support OpenAPI.
I also found some old emails in the mailing list about supporting OpenAPI 2.0 in 
OpenStack.


Some limitations of the OpenStack API are mentioned in the mailing list:
1. The POST */action APIs.
     These APIs exist in lots of projects, like nova and cinder.
     They have the same URI, but the response differs depending on the 
request.

2. Microversions.
     These are controlled via headers, and are sometimes used to describe 
behavioral changes in an API, not just request/response schema changes.


About the first limitation, we can find a solution in OpenAPI 3.0.
The example [2] shows that we can define different requests/responses for the 
same URI using the anyOf feature of OpenAPI 3.0.


This is a good first step, but if I get it right it does not specify which 
response corresponds to which request.
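
To make that concrete: anyOf can say "one of these request bodies is valid",
but the request->response pairing for a single */action URI has to live
somewhere else. A purely illustrative dispatch table (not any real schema
format) of what that pairing looks like:

```python
# Purely illustrative: each request shape on the single */action URI
# mapped to its own response. The names are made up for the example.
ACTION_SCHEMAS = {
    "reboot": {"request_keys": {"type"}, "response": "reboot-response"},
    "resize": {"request_keys": {"flavorRef"}, "response": "resize-response"},
}

def matching_response(body):
    """Find the action schema matching the request body, return its response."""
    (action, payload), = body.items()
    schema = ACTION_SCHEMAS.get(action)
    if schema is None or set(payload) != schema["request_keys"]:
        raise ValueError("no schema matches this request")
    return schema["response"]

print(matching_response({"reboot": {"type": "SOFT"}}))  # reboot-response
```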




About the microversions problem, I think it is not a limitation related to a 
specific API standard.

We can list all microversion API schema files in one directory, like nova/V2,


I don't think this approach will scale if you plan to generate anything based on 
these schemas. If you generate client code from separate schema files, you'll 
essentially end up with dozens of major versions.



or we can list the schema changes between microversions, as the tempest project 
did [3].


++
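
A tiny sketch of that delta idea (field names made up, not tempest's actual
layout): keep one base schema and apply per-microversion changes on top,
instead of storing a full schema copy per version.

```python
import copy

# One base schema, stored once.
BASE = {"properties": {"name": {"type": "string"}}}

# Each microversion records only what changed relative to the base.
DELTAS = {
    (2, 1): {},
    (2, 19): {"description": {"type": ["string", "null"]}},
}

def schema_for(version):
    """Build the effective schema for a given microversion tuple."""
    schema = copy.deepcopy(BASE)
    for v in sorted(DELTAS):
        if v <= version:
            schema["properties"].update(DELTAS[v])
    return schema

print(sorted(schema_for((2, 19))["properties"]))  # ['description', 'name']
print(sorted(schema_for((2, 1))["properties"]))   # ['name']
```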



OpenAPI 3.0 can bring lots of benefits to the OpenStack community without 
impacting the features the community already has.
For example, we can automatically generate API documents and clients (SDKs) in 
different languages, perhaps per microversion,


From my experience with writing an SDK, I don't believe generating a complete 
SDK from API schemas is useful. Maybe generating low-level protocol code to base 
an SDK on, but even that may be easier to do by hand.


Dmitry

and generate cloud tool adapters for OpenStack, like ansible modules, terraform 
providers and so on.
We could also build an API UI providing an online, browsable API search and API 
calling for every OpenStack API.

Third-party developers can also build their own tooling on top.

[1] https://github.com/OAI/OpenAPI-Specification
[2] 
https://github.com/edisonxiang/OpenAPI-Specification/blob/master/examples/v3.0/petstore.yaml#L94-L109
[3] 
https://github.com/openstack/tempest/tree/master/tempest/lib/api_schema/response/compute


Best Regards,
Edison Xiang









Re: [openstack-dev] [Openstack-operators] [ironic][tripleo][edge] Discussing ironic federation and distributed deployments

2018-08-31 Thread Dmitry Tantsur

On 08/30/2018 07:29 PM, Emilien Macchi wrote:



On Thu, Aug 30, 2018 at 1:21 PM Julia Kreger wrote:


Greetings everyone,

It looks like the most agreeable time on the doodle [1] is
Tuesday September 4th at 13:00 UTC. Are there any objections to using
this time?

If not, I'll go ahead and create an etherpad, and set up a bluejeans
call for that time to enable high-bandwidth discussion.


TripleO sessions start on Wednesday, so +1 from us (unless I missed something).


This is about a call a week before the PTG, not the PTG itself. You're still 
very welcome to join!


Dmitry


--
Emilien Macchi








Re: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad)

2018-08-28 Thread Dmitry Tantsur

On 08/22/2018 03:26 PM, Jiri Tomasek wrote:

Hi,





- Plan and template management in git.

   This could be an iterative step towards eliminating Swift in the 
undercloud.
   Swift seemed like a natural choice at the time because it was an existing
   OpenStack service.  However, I think git would do a better job at 
tracking
   history and comparing changes and is much more lightweight than Swift. 
We've
   been managing the config-download directory as a git repo, and I like 
this
   direction. For now, we are just putting the whole git repo in Swift, but 
I
   wonder if it makes sense to consider eliminating Swift entirely. We need 
to
   consider the scale of managing thousands of plans for separate edge
   deployments.

   I also think this would be a step towards undercloud simplification.


+1, we need to identify how much this affects the existing API and overall user 
experience for managing deployment plans. Current plan management options we 
support are:
- create plan from default files (/usr/share/tht...)
- create/update plan from local directory
- create/update plan by providing tarball
- create/update plan from remote git repository

Ian has been working on similar efforts towards performance improvements [2]. It 
would be good to take this a step further and evaluate the possibility of 
eliminating Swift entirely.


[2] https://review.openstack.org/#/c/581153/


We need to do something about ironic-inspector then: it currently depends on 
swift for storing collected data. Fortunately, there is a spec to fix it, but it 
hasn't been our team's priority. Reviews are welcome: 
https://review.openstack.org/#/c/587698/


Dmitry



-- Jirka














[openstack-dev] [ironic] proposing metalsmith for inclusion into ironic governance

2018-08-27 Thread Dmitry Tantsur

Hi all,

I would like to propose the metalsmith library [1][2] for inclusion into the bare 
metal project governance.


What it is and is not
-

Metalsmith is a library and CLI tool that uses Ironic+Neutron to provision bare 
metal nodes. It can be seen as a lightweight replacement for Nova when Nova is 
too much. The primary use case is a single-tenant standalone installer.


Metalsmith is not a new service, it does not maintain any state, except for 
state maintained by Ironic and Neutron. Metalsmith is not and will not be a 
replacement for Nova in any proper cloud scenario.


Metalsmith does have some overlap with Bifrost, with one important difference: 
its primary feature is a mini-scheduler that makes it possible to pick a 
suitable bare metal node for deployment.
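
For a feel of what "mini-scheduler" means here, a rough sketch (not
metalsmith's actual API, just the shape of the decision it makes):

```python
def pick_node(nodes, resource_class, required_traits):
    """Pick the first available node matching a class and all traits."""
    for node in nodes:
        if node["provision_state"] != "available":
            continue
        if node["resource_class"] != resource_class:
            continue
        if not set(required_traits) <= set(node["traits"]):
            continue
        return node
    return None  # nothing suitable - the caller decides what to do

nodes = [
    {"name": "node-1", "provision_state": "active",
     "resource_class": "baremetal", "traits": ["BOOT_MODE_UEFI"]},
    {"name": "node-2", "provision_state": "available",
     "resource_class": "baremetal", "traits": ["BOOT_MODE_UEFI", "RAID10"]},
]
print(pick_node(nodes, "baremetal", ["BOOT_MODE_UEFI"])["name"])  # node-2
```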


I have a partial convergence plan as well! First, as part of this effort I'm 
working on missing features in openstacksdk, which is used in the OpenStack 
ansible modules, which are in turn used in Bifrost. Second, I hope we can use 
metalsmith as a helper for making Bifrost do scheduling decisions.


Background
--

Metalsmith was born with the goal of replacing Nova in TripleO undercloud. 
Indeed, the undercloud uses only a small subset of Nova features, while having 
features that conflict with Nova's design (for example, bypassing the scheduler 
[3]).


We wanted to avoid putting a lot of provisioning logic into existing TripleO 
components, so I wrote a library that does not carry any TripleO-specific 
assumptions, but does allow us to address TripleO's needs.


Why under Ironic


I believe the goal of Metalsmith is fully aligned with what the Ironic team is 
doing around standalone deployments. I think Metalsmith can provide a nice entry 
point into standalone deployments for people who (for whatever reason) will not 
use Bifrost. With this change I hope to get more exposure for it.


The library itself is small, documented [2], follows OpenStack practices and 
does not have particular operating requirements. There is nothing in it that is 
not familiar to the Ironic team members.


Please let me know if you have any questions or concerns.

Dmitry


[1] https://github.com/openstack/metalsmith
[2] https://metalsmith.readthedocs.io/en/latest/
[3] http://tripleo.org/install/advanced_deployment/node_placement.html



Re: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad)

2018-08-27 Thread Dmitry Tantsur

Hi,

Some additions inline.

On 08/20/2018 10:47 PM, James Slagle wrote:

As we start looking at how TripleO will address next generation deployment
needs such as Edge, multi-site, and multi-cloud, I'd like to kick off a
discussion around how TripleO can evolve and adapt to meet these new
challenges.

What are these challenges? I think the OpenStack Edge Whitepaper does a good
job summarizing some of them:

https://www.openstack.org/assets/edge/OpenStack-EdgeWhitepaper-v3-online.pdf

They include:

- management of distributed infrastructure
- massive scale (thousands instead of hundreds)
- limited network connectivity
- isolation of distributed sites
- orchestration of federated services across multiple sites

We already have a lot of ongoing work that directly or indirectly starts to
address some of these challenges. That work includes things like
config-download, split-controlplane, metalsmith integration, validations,
all-in-one, and standalone.

I laid out some initial ideas in a previous message:

http://lists.openstack.org/pipermail/openstack-dev/2018-July/132398.html

I'll be reviewing some of that here and going into a bit more detail.

These are some of the high level ideas I'd like to see TripleO start to
address:

- More separation between planning and deploying (likely to be further defined
   in spec discussion). We've had these concepts for a while, but we need to do
   a better job of surfacing them to users as deployments grow in size and
   complexity.

   With config-download, we can more easily separate the phases of rendering,
   downloading, validating, and applying the configuration. As we increase in
   scale to managing many deployments, we should take advantage of what each of
   those phases offer.

   The separation also makes the deployment more portable, as we should
   eliminate any restrictions that force the undercloud to be the control node
   applying the configuration.

- Management of multiple deployments from a single undercloud. This is of
   course already possible today, but we need better docs and polish and more
   testing to flush out any bugs.

- Plan and template management in git.

   This could be an iterative step towards eliminating Swift in the undercloud.
   Swift seemed like a natural choice at the time because it was an existing
   OpenStack service.  However, I think git would do a better job at tracking
   history and comparing changes and is much more lightweight than Swift. We've
   been managing the config-download directory as a git repo, and I like this
   direction. For now, we are just putting the whole git repo in Swift, but I
   wonder if it makes sense to consider eliminating Swift entirely. We need to
   consider the scale of managing thousands of plans for separate edge
   deployments.

   I also think this would be a step towards undercloud simplification.

- Orchestration between plans. I think there's general agreement around scaling
   up the undercloud to be more effective at managing and deploying multiple
   plans.

   The plans could be different OpenStack deployments potentially sharing some
   resources. Or, they could be deployments of different software stacks
   (Kubernetes/OpenShift, Ceph, etc).

   We'll need to develop some common interfaces for some basic orchestration
   between plans. It could include dependencies, ordering, and sharing parameter
   data (such as passwords or connection info). There is already some ongoing
   discussion about some of this work:

   http://lists.openstack.org/pipermail/openstack-dev/2018-August/133247.html

   I would suspect this would start out as collecting specific use cases, and
   then figuring out the right generic interfaces.

- Multiple deployments of a single plan. This could be useful for doing many
   deployments that are all the same. Of course some info might be different
   such as network IP's, hostnames, and node specific details. We could have
   some generic input interfaces for those sorts of things without having to
   create new Heat stacks, which would allow re-using the same plan/stack for
   multiple deployments. When scaling to hundreds/thousands of edge deployments
   this could be really effective at side-stepping managing hundreds/thousands
   of Heat stacks.

   We may also need further separation between a plan and it's deployment state
   to have this modularity.

- Distributed management/application of configuration. Even though the
   configuration is portable (config-download), we may still want some
   automation around applying the deployment when not using the undercloud as a
   control node. I think things like ansible-runner or Ansible AWX could help
   here, or perhaps mistral-executor agents, or "mistral as a library". This
   would also make our workflows more portable.

- New documentation highlighting some or all of the above features and how to
   take advantage of it for new use cases (thousands of edge deployments, etc).
   I see this as a 

Re: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes

2018-08-25 Thread Dmitry Tantsur
+1 to all

On Thu, Aug 23, 2018, 20:25 Julia Kreger 
wrote:

> Greetings everyone!
>
> In our team meeting this week we stumbled across the subject of
> promoting contributors to be sub-project's core reviewers.
> Traditionally it is something we've only addressed as needed or
> desired by consensus with-in those sub-projects, but we were past due
> time to take a look at the entire picture since not everything should
> fall to ironic-core.
>
> And so, I've taken a look at our various repositories and I'm
> proposing the following additions:
>
> For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya
> Etingof[1]. Ilya has been actively involved with sushy, sushy-tools,
> and virtualbmc this past cycle. I've found many of his reviews and
> non-voting review comments insightful, showing a real willingness to
> understand. He has taken on some of the effort needed to maintain and keep
> these tools usable for the community, and as such adding him to the
> core group for these repositories makes lots of sense.
>
> For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2].
> Kaifeng has taken on some hard problems in ironic and
> ironic-inspector, as well as brought up insightful feedback in
> ironic-specs. They are demonstrating a solid understanding that I only
> see growing as time goes on.
>
> For sushy-core: Debayan Ray[3]. Debayan has been involved with the
> community for some time and has worked on sushy from early on in its
> life. He has indicated it is near and dear to him, and he has been
> actively reviewing and engaging in discussion on patchsets as his time
> has permitted.
>
> With any addition it is good to look at inactivity as well. It saddens
> me to say that we've had some contributors move on as priorities have
> shifted to where they are no longer involved with the ironic
> community. Each person listed below has been inactive for a year or
> more and is no longer active in the ironic community. As such I've
> removed their group membership from the sub-project core reviewer
> groups. Should they return, we will welcome them back to the community
> with open arms.
>
> bifrost-core: Stephanie Miller[4]
> ironic-inspector-core: Anton Arefivev[5]
> ironic-ui-core: Peter Peila[6], Beth Elwell[7]
>
> Thanks,
>
> -Julia
>
> [1]: http://stackalytics.com/?user_id=etingof=marks
> [2]: http://stackalytics.com/?user_id=kaifeng=marks
> [3]: http://stackalytics.com/?user_id=deray=marks=all
> [4]: http://stackalytics.com/?metric=marks=all_id=stephan
> [5]: http://stackalytics.com/?user_id=aarefiev=marks
> [6]: http://stackalytics.com/?metric=marks=all_id=ppiela
> [7]:
> http://stackalytics.com/?metric=marks=all_id=bethelwell=ironic-ui
>


Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-23 Thread Dmitry Tantsur

On 08/17/2018 07:45 AM, Cédric Jeanneret wrote:



On 08/17/2018 12:25 AM, Steve Baker wrote:



On 15/08/18 21:32, Cédric Jeanneret wrote:

Dear Community,

As you may know, a move toward Podman as a replacement for Docker is starting.

One of the issues with podman is the lack of a daemon, more precisely the lack
of a socket allowing one to send commands and get computer-formatted
output (like JSON or YAML or...).

In order to work that out, Podman has added support for varlink¹, using
the "socket activation" feature in Systemd.

On my side, I would like to push forward the integration of varlink in
TripleO deployed containers, especially since it will allow the following:
# proper interface with Paunch (via python link)

I'm not sure this would be desirable. If we're going to do all container
management via a socket, I think we'd be better supported by using CRI-O.
One of the advantages I see in podman is being able to manage services
with systemd again.


Using the socket wouldn't prevent a "per service" systemd unit. Varlink
would just provide another way to manage the containers.
It's NOT like the docker daemon - it will not manage the containers on
startup for example. It's just an API endpoint, without any "automated
powers".

See it as an interesting complement to the CLI, allowing to access
containers data easily with a computer-oriented language like python3.


# a way to manage containers from within specific containers (think
"healthcheck", "monitoring") by mounting the socket as a shared volume

# a way to get container statistics (think "metrics")

# a way, if needed, to get an ansible module being able to talk to
podman (JSON is always better than plain text)

# a way to secure the accesses to Podman management (we have to define
how varlink talks to Podman, maybe providing dedicated socket with
dedicated rights so that we can have dedicated users for specific tasks)

Some of these cases might prove to be useful, but I do wonder whether plain
podman calls would be just as simple, without the complexity of
having another host-level service to manage. We can still do podman
operations inside containers by bind-mounting in the container state.


I wouldn't mount the container state as-is, mainly for security reasons.
I'd rather have the varlink abstraction than the plain `podman' CLI - in
addition, it is far, far easier for applications to consume proper JSON
than some random plain text, even if `podman' seems to have a "--format"
option. I really dislike calling "subprocess" things when there is a
nice API interface - maybe that's just me ;).
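
To illustrate the JSON point - podman's exact output fields vary between
versions, so the sample below is made up, but the shape is what
`podman ps --format json` produces:

```python
import json

def parse_ps(output):
    """Parse `podman ps --format json`-style output into Python objects."""
    return json.loads(output or "[]")

def running_names(output):
    """Return names of running containers; no text scraping involved."""
    return [c["Names"] for c in parse_ps(output) if c.get("State") == "running"]

# A made-up sample of what such output could look like.
sample = ('[{"Names": ["keystone"], "State": "running"},'
          ' {"Names": ["tmp"], "State": "exited"}]')
print(running_names(sample))  # [['keystone']]
```

With a varlink socket the same structured data would arrive over an API call
instead of a subprocess, which is the whole argument here.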

In addition, apparently the state is managed by some sqlite DB -
concurrent accesses to that DB isn't really a good idea, we really don't
want a corruption, do we?


IIRC sqlite handles concurrent accesses, it just does them slowly.
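
A quick demonstration with the stdlib sqlite3 module: two connections to the
same database file coexist fine, as long as writers commit and a busy timeout
is set, so a blocked writer waits instead of erroring out.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "state.db")

# timeout=5.0 makes a connection wait up to 5s on a locked database
# instead of raising "database is locked" immediately.
writer = sqlite3.connect(path, timeout=5.0)
reader = sqlite3.connect(path, timeout=5.0)

writer.execute("CREATE TABLE containers (name TEXT)")
writer.execute("INSERT INTO containers VALUES ('keystone')")
writer.commit()  # releases the write lock so other connections can read

print(reader.execute("SELECT name FROM containers").fetchone()[0])  # keystone
```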






That said, I have some questions:
° Does any of you have some experience with varlink and podman interface?
° What do you think about that integration wish?
° Does any of you have concern with this possible addition?

I do worry a bit that it is advocating for a solution before we really
understand the problems. The biggest unknown for me is what we do about
healthchecks. Maybe varlink is part of the solution here, or maybe it's a
systemd timer which executes the healthcheck and restarts the service
when required.


Maybe. My main concern is: would it be interesting to compare both
solutions?
The healthchecks are clearly docker-specific; no interface exists atm in
the libpod for that. So we have to mimic it in the best way we can.
Maybe the healthchecks place is in systemd, and varlink would be used
only for external monitoring and metrics. That would also be a nice way
to explore.

I would not focus on only one of the possibilities I've listed. There
are probably even more possibilities I didn't see - once we get a proper
socket, anything is possible, the good and the bad ;).


Thank you for your feedback and ideas.

Have a great day (or evening, or whatever suits the time you're reading
this ;))!

C.


¹ https://www.projectatomic.io/blog/2018/05/podman-varlink/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev














Re: [openstack-dev] [mistral] No Denver PTG Sessions

2018-08-21 Thread Dmitry Tantsur

On 08/21/2018 10:22 AM, Dougal Matthews wrote:

Hi all,

Unfortunately due to some personal conflicts and trouble with travel plans, 
there will be no Mistral cores at the Denver PTG. This means that we have had to 
cancel the Mistral sessions. I recently asked if anyone was planning to attend 
and only got one maybe.


I am considering trying to arrange a "virtual PTG", so we can do some planning 
for Stein. However, I'm not sure how/if that could work. Do you think this would 
be a good idea? Suggestions on how to organise one would be very welcome!


We did a few virtual midcycles for ironic, and they worked out quite well. While 
it did require some people to stay awake at unusual times, it did allow people 
without travel budget to attend.


Initially we used the OpenStack SIP system, but we found Bluejeans to be a bit 
easier to use. I think it has a limit of 300 participants, which is more than 
enough. Anyone from Red Hat can host it.


We dedicated 1-2 days with 4-5 hours each. I'd recommend against taking up the 
whole day - it will be too exhausting. The first time we tried splitting the 
slots into two per day: APAC-friendly and EMEA-friendly. Relatively few people 
showed up at the former, so the next time we only had one slot.


As with the PTG, having an agenda upfront helps a lot. We did synchronization 
and notes through an etherpad - exactly the same way as at the PTG.


Hope that helps,
Dmitry



Thanks,
Dougal








Re: [openstack-dev] [ironic] ironic-staging-drivers: what to do?

2018-08-14 Thread Dmitry Tantsur

Inline

On 08/13/2018 08:40 PM, Julia Kreger wrote:

Greetings fellow ironicans!

As many of you might know an openstack/ironic-staging-drivers[1]
repository exists. What most might not know is that it was
intentionally created outside of ironic's governance[2].

At the time it was created ironic was moving towards removing drivers
that did not meet our third-party CI requirement[3] to be in-tree. The
repository was an attempt to give a home to what some might find
useful or where third party CI is impractical or cost-prohibitive and
thus could not be officially part of Ironic the service. There was
hope that drivers could land in ironic-staging-drivers and possibly
graduate to being moved in-tree with third-party CI. As our community
has evolved, we've never stopped to revisit these questions.

With our most recent release over, I believe we need to ask ourselves
if we should consider moving ironic-staging-drivers into our
governance.


Not voting on this, since I'm obviously biased. Consider me +0 :)



Over the last couple of releases several contributors have found
themselves trying to seek out two available reviewers to merge even
trivial fixes[4]. Due to the team being so small this was no easy
task. As a result, I'm wondering why not move the repository into
governance, grant ironic-core review privileges upon the repository,
and maintain the purpose and meaning of the repository. This would
also result in the repository's release becoming managed via the
release management process which is a plus.


Strictly speaking, nothing prevents us from granting ironic-core +2 on it right 
now. It's a different question whether they'll actually review it. We need a 
commitment from >50% of cores to review it more or less regularly, otherwise 
we'll end up in the same situation.




We could then propose an actual graduation process and help alleviate
some of the issues where driver code is iterated upon for long periods
of time before landing. At the same time I can see at least one issue
which is if we were to do that, then we would also need to manage
removal through the same path.


I don't think we really "need to", but we certainly can. Now that I think we 
could use ironic-staging-drivers as an *actual* staging area for new drivers, 
I'm starting to lean towards +1 on this whole idea.


This still leaves some drivers that will never get a CI.



I know there are concerns over responsibility in terms of code
ownership and quality, but I feel like we already hit such issues[5],
like those encountered when Dmitry removed classic drivers[6] from the
repository and also encountered issues just prior to the latest
release[7][8].


Well, yes, I personally have to care about this repository anyway.

Dmitry



This topic has come up in passing at PTGs and most recently on IRC[9],
and I think we ought to discuss it during our next weekly meeting[10].
I've gone ahead and added an item to the agenda, but we can also
discuss via email.

-Julia

[1]: 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/projects.yaml#n4571
[2]: 
http://git.openstack.org/cgit/openstack/ironic-staging-drivers/tree/README.rst#n16
[3]: 
https://specs.openstack.org/openstack/ironic-specs/specs/approved/third-party-ci.html
[4]: https://review.openstack.org/#/c/548943/
[5]: https://review.openstack.org/#/c/541916/
[6]: https://review.openstack.org/567902
[7]: https://review.openstack.org/590352
[8]: https://review.openstack.org/590401
[9]: 
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2018-08-09.log.html#t2018-08-09T11:55:27
[10]: https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting







Re: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release

2018-08-13 Thread Dmitry Tantsur

On 08/13/2018 03:46 PM, Doug Hellmann wrote:

Excerpts from Dmitry Tantsur's message of 2018-08-13 15:35:23 +0200:

Hi,

The plugins are branchless and should stay so. Let us not dive into this madness
again please.


You are correct that we do not want to branch, because we want the
same tests running against all branches of services in our CI system
to help us avoid (or at least recognize) API-breaking changes across
release boundaries.


Okay, thank you for clarification. I stand corrected and apologize if my 
frustration was expressed too loudly or harshly :)




We *do* need to tag so that people consuming the plugins to certify
their clouds know which version of the plugin works with the version
of the software they are installing. Newer versions of plugins may
rely on features or changes in newer versions of tempest, or other
dependencies, that are not available in an environment that is
running an older cloud.


++



We will apply those tags in the series-specific deliverable files in
openstack/releases so that the version numbers appear together on
releases.openstack.org on the relevant release page so that users
looking for the "rocky" version of a plugin can find it easily.


Okay, this makes sense now.



Doug



Dmitry

On 08/12/2018 10:41 AM, Ghanshyam Mann wrote:

Hi All,

The Rocky release is a few weeks away, and we all agreed to release Tempest 
plugins with the cycle-with-intermediary model. The detailed discussion is in 
the ML [1] in case you missed it.

This is a reminder to tag your project's Tempest plugins for the Rocky release. 
You should be able to find your plugin's deliverable file under the rocky 
folder in the releases repo [3]. You can refer to the cinder-tempest-plugin 
release as an example.

Feel free to reach out to the release/QA team for any help or queries.


Please make up your mind. Please. Please. Please.



[1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html
[2] https://review.openstack.org/#/c/590025/
[3] https://github.com/openstack/releases/tree/master/deliverables/rocky

-gmann














Re: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release

2018-08-13 Thread Dmitry Tantsur

Hi,

The plugins are branchless and should stay so. Let us not dive into this madness 
again please.


Dmitry

On 08/12/2018 10:41 AM, Ghanshyam Mann wrote:

Hi All,

The Rocky release is a few weeks away, and we all agreed to release Tempest 
plugins with the cycle-with-intermediary model. The detailed discussion is in 
the ML [1] in case you missed it.

This is a reminder to tag your project's Tempest plugins for the Rocky release. 
You should be able to find your plugin's deliverable file under the rocky 
folder in the releases repo [3]. You can refer to the cinder-tempest-plugin 
release as an example.

Feel free to reach out to the release/QA team for any help or queries.


Please make up your mind. Please. Please. Please.



[1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html
[2] https://review.openstack.org/#/c/590025/
[3] https://github.com/openstack/releases/tree/master/deliverables/rocky

-gmann









Re: [openstack-dev] [release] Release countdown for week R-3, August 6-10

2018-08-06 Thread Dmitry Tantsur

On 08/03/2018 06:40 PM, Sean McGinnis wrote:

On Fri, Aug 03, 2018 at 11:23:56AM -0500, Sean McGinnis wrote:

-



More information on deadlines since we appear to have some conflicting
information documented. According to the published release schedule:

https://releases.openstack.org/rocky/schedule.html#r-finalrc

we stated intermediary releases had to be done by the final RC date. So based
on that, cycle-with-intermediary projects have until August 20 to do their
final release.


Another hint though: if your project uses grenade, you probably want to cut 
stable/rocky at the same time as everyone else.




Of course, doing so before that deadline is highly encouraged to make sure there
are not any last-minute problems to work through, if at all possible.



Upcoming Deadlines & Dates
--

RC1 deadline: August 9

cycle-with-intermediary deadline: August 20











Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-08-01 Thread Dmitry Tantsur

On 08/01/2018 02:21 PM, Andrey Kurilin wrote:



Wed, Aug 1, 2018 at 14:11, Dmitry Tantsur <dtant...@redhat.com>:


On 08/01/2018 12:49 PM, Andrey Kurilin wrote:
 > Hey Ian and stackers!
 >
 > Wed, Aug 1, 2018 at 8:45, Ian Wienand <iwien...@redhat.com>:
 >
 >     Hello,
 >
 >     It seems freenode is currently receiving a lot of unsolicited traffic
 >     across all channels.  The freenode team are aware [1] and doing their
 >     best.
 >
 >     There are not really a lot of options.  We can set "+r" on channels
 >     which means only nickserv registered users can join channels.  We 
have
 >     traditionally avoided this, because it is yet one more barrier to
 >     communication when many are already unfamiliar with IRC access.
 >     However, having channels filled with irrelevant messages is also not
 >     very accessible.
 >
 >     This is temporarily enabled in #openstack-infra for the time being, 
so
 >     we can co-ordinate without interruption.
 >
 >     Thankfully AFAIK we have not needed an abuse policy on this before;
 >     but I guess we are the point we need some sort of coordinated
 >     response.
 >
 >     I'd suggest to start, people with an interest in a channel can 
request
 >     +r from an IRC admin in #openstack-infra and we track it at [2]
 >
 >     Longer term ... suggestions welcome? :)
 >
 >
 > Move to Slack? We can provide auto-sent email invitations for joining by
 > clicking a button on some page at openstack.org. It will not add more
 > barriers for new contributors and, at the same time, this way will give
 > some basic filtering by email at least.

A few potential barriers with slack or similar solutions: lack of FOSS 
desktop
clients (correct me if I'm wrong), 



The second link from a Google search gives an open-source client written in 
Python: https://github.com/raelgc/scudcloud . Also, there is something written 
in Go.


The bad thing about non-official clients is that they come and go. An even worse 
thing is that Slack can (in theory) prevent them from operating or make it 
illegal (remember ICQ's attempts to ban unofficial clients?).


And I agree with Doug that the non-free server part can be an issue as well. At 
the very least, we end up being locked into their service.




complete lack of any console clients (ditto),


Again, Google gives several as its first results - 
https://github.com/evanyeung/terminal-slack 
https://github.com/erroneousboat/slack-term


Okay, I stand corrected here.



serious limits on free (as in beer) tariff plans.


I can assume that, for marketing reasons, Slack Inc could offer an extended 
free plan.


Are there precedents of them doing such a thing? Otherwise I would not count on 
it. Especially if they don't commit to providing it for free forever.


But anyway, even with the default plan, the only thing that would limit us is 
the `10,000 searchable messages` cap, which is bigger than 0 (freenode doesn't 
store messages).


Well, my IRC bouncer has kept messages for years :) I understand it's not a 
comparable solution, but I do have a way to find a message from a year ago. Not 
with Slack.




Why do I like Slack? Because a lot of people are familiar with it (a lot of 
companies use it, as do some open-source communities, like k8s).


PS: I realize that the OpenStack community will never move away from Freenode 
and IRC, but I do not want to stay silent.


I wouldn't mind at all moving to a more modern *FOSS* system. If we consider 
paying for Slack, we can consider hosting Matrix/Rocket.Chat/whatever as well.


Dmitry



 >
 >     -i
 >
 >     [1] https://freenode.net/news/spambot-attack
 >     [2] https://etherpad.openstack.org/p/freenode-plus-r-08-2018
 >
 >
 >
 >
 > --
 > Best regards,
 > Andrey Kurilin.
 >
 >

Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-08-01 Thread Dmitry Tantsur

On 08/01/2018 12:49 PM, Andrey Kurilin wrote:

Hey Ian and stackers!

Wed, Aug 1, 2018 at 8:45, Ian Wienand wrote:


Hello,

It seems freenode is currently receiving a lot of unsolicited traffic
across all channels.  The freenode team are aware [1] and doing their
best.

There are not really a lot of options.  We can set "+r" on channels
which means only nickserv registered users can join channels.  We have
traditionally avoided this, because it is yet one more barrier to
communication when many are already unfamiliar with IRC access.
However, having channels filled with irrelevant messages is also not
very accessible.

This is temporarily enabled in #openstack-infra for the time being, so
we can co-ordinate without interruption.

Thankfully AFAIK we have not needed an abuse policy on this before;
but I guess we are the point we need some sort of coordinated
response.

I'd suggest to start, people with an interest in a channel can request
+r from an IRC admin in #openstack-infra and we track it at [2]

Longer term ... suggestions welcome? :)


Move to Slack? We can provide auto-sent email invitations for joining by 
clicking a button on some page at openstack.org. It will not add more barriers 
for new contributors and, at the same time, this way will give some basic 
filtering by email at least.


A few potential barriers with slack or similar solutions: lack of FOSS desktop 
clients (correct me if I'm wrong), complete lack of any console clients (ditto), 
serious limits on free (as in beer) tariff plans.




-i

[1] https://freenode.net/news/spambot-attack
[2] https://etherpad.openstack.org/p/freenode-plus-r-08-2018




--
Best regards,
Andrey Kurilin.








Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-08-01 Thread Dmitry Tantsur
Is it possible to ignore messages or kick users by keyword? It seems that most 
messages are more or less the same and include a few URLs that are unlikely to 
appear in a normal conversation.
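As a sketch of what such keyword matching could look like for a channel bot or service (the patterns below are placeholders, not the actual spam strings, and a real deployment would need a curated list):

```python
import re

# Hypothetical patterns; a real list would be curated from observed spam.
SPAM_PATTERNS = [
    # URLs mentioning a suspicious host fragment.
    re.compile(r"https?://\S*freenode\S*", re.IGNORECASE),
    # A placeholder catchphrase repeated across spam messages.
    re.compile(r"with\s+love\s+from", re.IGNORECASE),
]


def is_spam(message, threshold=1):
    """Flag a message when at least `threshold` known patterns match."""
    hits = sum(1 for pattern in SPAM_PATTERNS if pattern.search(message))
    return hits >= threshold
```

A bot could then drop or report matching messages before they reach the channel; the trade-off is keeping the pattern list current as the spam mutates.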


On 08/01/2018 07:45 AM, Ian Wienand wrote:

Hello,

It seems freenode is currently receiving a lot of unsolicited traffic
across all channels.  The freenode team are aware [1] and doing their
best.

There are not really a lot of options.  We can set "+r" on channels
which means only nickserv registered users can join channels.  We have
traditionally avoided this, because it is yet one more barrier to
communication when many are already unfamiliar with IRC access.
However, having channels filled with irrelevant messages is also not
very accessible.

This is temporarily enabled in #openstack-infra for the time being, so
we can co-ordinate without interruption.

Thankfully AFAIK we have not needed an abuse policy on this before;
but I guess we are the point we need some sort of coordinated
response.

I'd suggest to start, people with an interest in a channel can request
+r from an IRC admin in #openstack-infra and we track it at [2]

Longer term ... suggestions welcome? :)

-i

[1] https://freenode.net/news/spambot-attack
[2] https://etherpad.openstack.org/p/freenode-plus-r-08-2018






Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Dmitry Tantsur
On Thu, Jul 5, 2018, 19:31 Fox, Kevin M  wrote:

> We're pretty far into a tangent...
>
> /me shrugs. I've done it. It can work.
>
> Some things you're right about. Deploying k8s is more work than deploying
> ansible. But what I said depends on context. If your goal is to deploy or
> manage k8s, then having to learn how to use k8s is not a big ask. Adding a
> different tool such as ansible is an extra cognitive dependency. Deploying
> k8s doesn't need a general solution for deploying generic base OSes. Just
> enough OS to deploy k8s, then deploy everything on top in containers.
> Deploying a seed k8s with minikube is pretty trivial. I'm not suggesting a
> solution here to provide generic provisioning to every use case in the
> datacenter. But enough to get a k8s based cluster up and self hosted enough
> where you could launch other provisioning/management tools in that same
> cluster, if you need that. It provides a solid base for the datacenter on
> which you can easily add the services you need for dealing with everything.
>
> All of the microservices I mentioned can be wrapped up in a single helm
> chart and deployed with a single helm install command.
>
> I don't have permission to release anything at the moment, so I can't
> prove anything right now. So, take my advice with a grain of salt. :)
>
> Switching gears, you said why would users use lfs when they can use a
> distro, so why use openstack without a distro. I'd say, today unless you
> are paying a lot, there isn't really an equivalent distro that isn't almost
> as much effort as lfs when you consider day2 ops. To compare with Redhat
> again, we have a RHEL (redhat openstack), and Rawhide (devstack) but no
> equivalent of CentOS. Though I think TripleO has been making progress on
> this front...
>

It's RDO that you're looking for (the equivalent of CentOS). TripleO is an
installer project, not a distribution.


> Anyway. This thread is I think 2 tangents away from the original topic
> now. If folks are interested in continuing this discussion, lets open a new
> thread.
>
> Thanks,
> Kevin
>
> 
> From: Dmitry Tantsur [dtant...@redhat.com]
> Sent: Wednesday, July 04, 2018 4:24 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
>
> Tried hard to avoid this thread, but this message is so much wrong..
>
> On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
> > I don't dispute trivial, but a self hosting k8s on bare metal is not
> incredibly hard. In fact, it is easier than you might think. k8s is a
> platform for deploying/managing services. Guess what you need to provision
> bare metal? Just a few microservices. A dhcp service. dhcpd in a daemonset
> works well. some pxe infrastructure. pixiecore with a simple http backend
> works pretty well in practice. a service to provide installation
> instructions. nginx server handing out kickstart files for example. and a
> place to fetch rpms from in case you don't have internet access or want to
> ensure uniformity. nginx server with a mirror yum repo. Its even possible
> to seed it on minikube and sluff it off to its own cluster.
> >
> > The main hard part about it is currently no one is shipping a reference
> implementation of the above. That may change...
> >
> > It is certainly much, much easier than deploying enough OpenStack to get
> > a self-hosting ironic working.
>
> Side note: no, it's not. What you describe is about as hard as installing
> standalone ironic from scratch, and much harder than using bifrost for
> everything. Especially when you try to do it in production. Especially with
> unusual operating requirements ("no TFTP servers on my network").
>
> Also, sorry, I cannot resist:
> "Guess what you need to orchestrate containers? Just a few things. A
> container
> runtime. Docker works well. Some remote execution tooling. Ansible works
> pretty well in practice. It is certainly much, much easier than deploying
> enough k8s to get self-hosting container orchestration working."
>
> Such oversimplifications won't bring us anywhere. Sometimes things are hard
> because they ARE hard. Where are the people complaining that installing a
> full GNU/Linux distribution from upstream tarballs is hard? How many
> operators here use LFS as their distro? If we are okay with using a distro
> for GNU/Linux, why does using a distro for OpenStack cause so much
> contention?
>
> >
> > Thanks,
> > Kevin
> >
> > 
> > From: Jay Pipes [jaypi...@gmail.com]
> > Sent: Tuesday, July 03, 2018 10:06 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [o

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-04 Thread Dmitry Tantsur

Tried hard to avoid this thread, but this message is so much wrong..

On 07/03/2018 09:48 PM, Fox, Kevin M wrote:

I don't dispute trivial, but a self-hosting k8s on bare metal is not incredibly 
hard. In fact, it is easier than you might think. k8s is a platform for 
deploying/managing services. Guess what you need to provision bare metal? Just 
a few microservices. A DHCP service: dhcpd in a daemonset works well. Some PXE 
infrastructure: pixiecore with a simple HTTP backend works pretty well in 
practice. A service to provide installation instructions: an nginx server 
handing out kickstart files, for example. And a place to fetch RPMs from, in 
case you don't have internet access or want to ensure uniformity: an nginx 
server with a mirrored yum repo. It's even possible to seed it on minikube and 
slough it off to its own cluster.

The main hard part about it is currently no one is shipping a reference 
implementation of the above. That may change...

It is certainly much, much easier than deploying enough OpenStack to get a 
self-hosting ironic working.


Side note: no, it's not. What you describe is about as hard as installing 
standalone ironic from scratch, and much harder than using bifrost for 
everything. Especially when you try to do it in production. Especially with 
unusual operating requirements ("no TFTP servers on my network").


Also, sorry, I cannot resist:
"Guess what you need to orchestrate containers? Just a few things. A container 
runtime. Docker works well. Some remote execution tooling. Ansible works pretty 
well in practice. It is certainly much, much easier than deploying enough k8s 
to get self-hosting container orchestration working."


Such oversimplifications won't bring us anywhere. Sometimes things are hard 
because they ARE hard. Where are the people complaining that installing a full 
GNU/Linux distribution from upstream tarballs is hard? How many operators here 
use LFS as their distro? If we are okay with using a distro for GNU/Linux, why 
does using a distro for OpenStack cause so much contention?




Thanks,
Kevin


From: Jay Pipes [jaypi...@gmail.com]
Sent: Tuesday, July 03, 2018 10:06 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 07/02/2018 03:31 PM, Zane Bitter wrote:

On 28/06/18 15:09, Fox, Kevin M wrote:

   * made the barrier to testing/development as low as 'curl
http://..minikube; minikube start' (this spurs adoption and
contribution)


That's not so different from devstack though.


   * not having large silo's in deployment projects allowed better
communication on common tooling.
   * Operator focused architecture, not project based architecture.
This simplifies the deployment situation greatly.
   * try whenever possible to focus on just the commons and push vendor
specific needs to plugins so vendors can deal with vendor issues
directly and not corrupt the core.


I agree with all of those, but to be fair to OpenStack, you're leaving
out arguably the most important one:

  * Installation instructions start with "assume a working datacenter"

They have that luxury; we do not. (To be clear, they are 100% right to
take full advantage of that luxury. Although if there are still folks
who go around saying that it's a trivial problem and OpenStackers must
all be idiots for making it look so difficult, they should really stop
embarrassing themselves.)


This.

There is nothing trivial about the creation of a working datacenter --
never mind a *well-running* datacenter. Comparing Kubernetes to
OpenStack -- particular OpenStack's lower levels -- is missing this
fundamental point and ends up comparing apples to oranges.

Best,
-jay








Re: [openstack-dev] [Puppet] Requirements for running puppet unit tests?

2018-07-03 Thread Dmitry Tantsur

On 07/02/2018 08:57 PM, Lars Kellogg-Stedman wrote:
On Thu, Jun 28, 2018 at 8:04 PM, Lars Kellogg-Stedman wrote:


What is required to successfully run the rspec tests?


On the odd chance that it might be useful to someone else, here's the Docker 
image I'm using to successfully run the rspec tests for puppet-keystone:


https://github.com/larsks/docker-image-rspec

Available on Docker Hub as larsks/rspec.


Nice, thanks! Last time I tried the tests my ruby was too new, so I'll give this 
a try.




Cheers,

--
Lars Kellogg-Stedman <l...@redhat.com>









Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-28 Thread Dmitry Tantsur

On 06/27/2018 03:17 AM, Ghanshyam Mann wrote:




   On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann  
wrote 
  > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400:
  > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote:
  > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100:
  > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez,  
wrote:
  > > > >
  > > > > > Dmitry Tantsur wrote:
  > > > > > > [...]
  > > > > > > My suggestion: tempest has to be compatible with all supported 
releases
  > > > > > > (of both services and plugins) OR be branched.
  > > > > > > [...]
  > > > > > I tend to agree with Dmitry... We have a model for things that need
  > > > > > release alignment, and that's the cycle-bound series. The reason 
tempest
  > > > > > is branchless was because there was no compatibility issue. If the 
split
  > > > > > of tempest plugins introduces a potential incompatibility, then I 
would
  > > > > > prefer aligning tempest to the existing model rather than introduce 
a
  > > > > > parallel tempest-specific cycle just so that tempest can stay
  > > > > > release-independent...
  > > > > >
  > > > > > I seem to remember there were drawbacks in branching tempest, 
though...
  > > > > > Can someone with functioning memory brain cells summarize them 
again ?
  > > > > >
  > > > >
  > > > >
  > > > > Branchless Tempest enforces api stability across branches.
  > > >
  > > > I'm sorry, but I'm having a hard time taking this statement seriously
  > > > when the current source of tension is that the Tempest API itself
  > > > is breaking for its plugins.
  > > >
  > > > Maybe rather than talking about how to release compatible things
  > > > together, we should go back and talk about why Tempest's API is changing
  > > > in a way that can't be made backwards-compatible. Can you give some more
  > > > detail about that?
  > > >
  > >
  > > Well it's not, if it did that would violate all the stability guarantees
  > > provided by Tempest's library and plugin interface. I've not ever heard of
  > > these kind of backwards incompatibilities in those interfaces and we go to
  > > all effort to make sure we don't break them. Where did the idea that
  > > backwards incompatible changes were being introduced come from?
  >
  > In his original post, gmann said, "There might be some changes in
  > Tempest which might not work with older version of Tempest Plugins."
  > I was surprised to hear that, but I'm not sure how else to interpret
  > that statement.

I did not mean to say that Tempest will introduce changes in a backward-incompatible 
way that can break plugins. That cannot happen: all plugins and Tempest are 
branchless and are tested with master Tempest, so if we changed anything in a 
backward-incompatible way it would break the plugins' gates. Even when we have to 
remove deprecated interfaces from Tempest, we fix all the plugins first, like - 
https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged)

What I meant to say is that adding a new interface to Tempest, or removing a 
deprecated one, might not work with all released versions (or unreleased master) of 
the plugins. That point is from the perspective of using Tempest and the plugins for 
production cloud testing, not the gate (where we keep compatibility). Production 
cloud users use the cycle-based Tempest version: a Pike-based cloud will be tested 
with Tempest 17.0.0, not the latest version (though the latest version might work).

This thread is not just about the gate-testing point of view (which is how it always 
seems to be interpreted); it is more about users running Tempest and the plugins 
against their own clouds. I am also looping in the operators mailing list, which I 
forgot in the initial post.

We do not have any tag/release from the plugins to tell which plugin version works 
with which Tempest version. For example, suppose a new interface is introduced in 
Tempest 19.0.0 and pluginX starts using it. This can create issues for pluginX under 
both release models: 1. plugins with no releases (I will call these PluginNR), 
2. plugins with independent releases (I will call these PluginIR).

Users (not the gate) will face the issues below:
- A user cannot use PluginNR with Tempest <19.0.0 (where that new interface was not 
present), and there is no PluginNR release/tag to fall back to, as it is unreleased 
and unbranched software.
- A user cannot find the particular PluginIR tag/release which can work with Tempest 
<19.0.0 (whe

Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Dmitry Tantsur

On 06/26/2018 11:57 AM, Ghanshyam Mann wrote:




   On Tue, 26 Jun 2018 18:37:42 +0900 Dmitry Tantsur  
wrote 
  > On 06/26/2018 11:18 AM, Ghanshyam Mann wrote:
  > > Hello Everyone,
  > >
  > > In Queens cycle,  community goal to split the Tempest Plugin has been 
completed [1] and i think almost all the projects have separate repo for tempest 
plugin [2]. Which means each tempest plugins are being separated from their project 
release model.  Few projects have started the independent release model for their 
plugins like kuryr-tempest-plugin, ironic-tempest-plugin etc [3].  I think 
neutron-tempest-plugin also planning as chatted with amotoki.
  > >
  > > There might be some changes in Tempest which might not work with older 
version of Tempest Plugins.  For example, If I am testing any production cloud which 
has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc  i will be using Tempest and 
Aodh's , Congress's Tempest plugins. With Independent release model of each Tempest 
Plugins, there might be chance that the Aodh's or Congress's Tempest plugin versions 
are not compatible with latest/known Tempest versions. It will become hard to find 
the compatible tag/release of Tempest and Tempest Plugins or in some cases i might 
need to patch up the things.
  >
  > FWIW this is solved by stable branches for all other projects. If we cannot 
keep
  > Tempest compatible with all supported branches, we should back off our 
decision
  > to make it branchless. The very nature of being branchless implies being
  > compatible with all supported releases.
  >
  > >
  > > During QA feedback sessions at Vancouver Summit, there was feedback to 
coordinating the release of all Tempest plugins and Tempest [4] (also amotoki talked 
to me on this as neutron-tempest-plugin is planning their first release). Idea is to 
release/tag all the Tempest plugins and Tempest together so that specific release/tag 
can be identified as compatible version of all the Plugins and Tempest for testing 
the complete stack. That way user can get to know what version of Tempest Plugins is 
compatible with what version of Tempest.
  > >
  > > For above use case, we need some coordinated release model among Tempest and all the Tempest 
Plugins. There should be single release of all Tempest Plugins with well defined tag whenever any Tempest 
release is happening.  For Example, Tempest version 19.0.0 is to mark the "support of the Rocky 
release". When releasing the Tempest 19.0, we will release all the Tempest plugins also to tag the 
compatibility of plugins with Tempest for "support of the Rocky release".
  > >
  > > One way to make this coordinated release (just a initial thought):
  > > 1. Release Each Tempest Plugins whenever there is any major version 
release of Tempest (like marking the support of OpenStack release in Tempest, EOL of 
OpenStack release in Tempest)
  > >  1.1 Each plugin or Tempest can do their intermediate release of 
minor version change which are in backward compatible way.
  > >  1.2 This coordinated Release can be started from latest Tempest 
Version for simple reading.  Like if we start this coordinated release from Tempest 
version 19.0.0 then,
  > >  each plugins will be released as 19.0.0 and so on.
  > >
  > > Giving the above background and use case of this coordinated release,
  > > A) I would like to ask each plugins owner if you are agree on this 
coordinated release?  If no, please give more feedback or issue we can face due to 
this coordinated release.
  >
  > Disclaimer: I'm not the PTL.
  >
  > Similarly to Luigi, I don't feel well about forcing a plugin release at the 
same
  > time as a tempest release, UNLESS tempest folks are going to coordinate 
their
  > releases with all how-many-do-we-have plugins. What I'd like to avoid is 
cutting
  > a release in the middle of a patch chain or some refactoring just because
  > tempest happened to have a release right now.

I understand your point, but we can avoid that case if we coordinate on major 
version bumps only. As I mentioned in point 1.2, Tempest and the Tempest plugins can 
do intermediate releases at any time, which are by definition backward-compatible. 
In this proposed model, we do a coordinated release only for a major version bump, 
which happens only on an OpenStack release or the EOL of a stable branch.
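The proposed rule could be sketched as a tiny compatibility check (a hypothetical illustration only, not an actual Tempest or plugin interface): a plugin release is assumed usable with a Tempest release exactly when their coordinated major versions match, while minor/patch releases remain independent.

```python
def compatible(tempest_version, plugin_version):
    """Return True if a plugin release is usable with a Tempest release
    under the proposed model: coordinated major versions, independent
    backward-compatible minor/patch releases."""
    tempest_major = int(tempest_version.split(".")[0])
    plugin_major = int(plugin_version.split(".")[0])
    return tempest_major == plugin_major

# Tempest 19.0.0 marks "support of the Rocky release"; plugins tagged
# 19.x.y are expected to work with any Tempest 19.x.y.
assert compatible("19.0.0", "19.2.1")
assert not compatible("19.0.0", "18.3.0")
```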


Even bigger concern: what if the plugin is actually not compatible yet? Say you're 
releasing tempest 19.0. At the same point you're cutting ironic-tempest-plugin 19.0. 
Who guarantees that they're compatible? If we haven't had any patches for it in a 
month, it may well happen that it does not work.




Or I am all open to have another release model which can be best suited for all 
plugins which can address the me

Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Dmitry Tantsur

On 06/26/2018 11:18 AM, Ghanshyam Mann wrote:

Hello Everyone,

In the Queens cycle, the community goal to split the Tempest plugins was completed 
[1], and I think almost all projects have a separate repo for their tempest plugin 
[2], which means each tempest plugin is being separated from its project's release 
model. A few projects have started the independent release model for their plugins, 
like kuryr-tempest-plugin, ironic-tempest-plugin, etc. [3]. I think 
neutron-tempest-plugin is also planning this, as I learned chatting with amotoki.

There might be some changes in Tempest which do not work with older versions of the 
Tempest plugins. For example, if I am testing a production cloud which has Nova, 
Neutron, Cinder, Keystone, Aodh, Congress, etc., I will be using Tempest plus Aodh's 
and Congress's Tempest plugins. With an independent release model for each Tempest 
plugin, there is a chance that the Aodh or Congress Tempest plugin versions are not 
compatible with the latest/known Tempest versions. It becomes hard to find 
compatible tags/releases of Tempest and the Tempest plugins, or in some cases I 
might need to patch things up.


FWIW this is solved by stable branches for all other projects. If we cannot keep 
Tempest compatible with all supported branches, we should back off our decision 
to make it branchless. The very nature of being branchless implies being 
compatible with all supported releases.




During the QA feedback sessions at the Vancouver Summit, there was feedback about 
coordinating the releases of all the Tempest plugins and Tempest [4] (amotoki also 
talked to me about this, as neutron-tempest-plugin is planning its first release). 
The idea is to release/tag all the Tempest plugins and Tempest together, so that a 
specific release/tag can be identified as the compatible set of all the plugins and 
Tempest for testing the complete stack. That way users can know which version of the 
Tempest plugins is compatible with which version of Tempest.

For the above use case, we need a coordinated release model for Tempest and all the Tempest 
plugins: a single release of all the Tempest plugins, with a well-defined tag, whenever a 
Tempest release happens. For example, Tempest version 19.0.0 marks "support of the Rocky 
release"; when releasing Tempest 19.0.0, we would release all the Tempest plugins as well, to 
tag the plugins' compatibility with Tempest for "support of the Rocky release".

One way to make this coordinated release happen (just an initial thought):
1. Release each Tempest plugin whenever there is a major version release of Tempest 
(like marking support of an OpenStack release in Tempest, or the EOL of an OpenStack 
release in Tempest).
 1.1 Each plugin, or Tempest itself, can do intermediate minor-version releases 
at any time, as long as they are backward-compatible.
 1.2 This coordinated release can start from the latest Tempest version for 
simple reading. For example, if we start the coordinated release from Tempest 
version 19.0.0, then each plugin will also be released as 19.0.0, and so on.

Given the above background and use case for this coordinated release,
A) I would like to ask each plugin's owners whether you agree with this coordinated 
release. If not, please give more feedback on the issues we could face because of 
it.


Disclaimer: I'm not the PTL.

Similarly to Luigi, I don't feel good about forcing a plugin release at the same 
time as a tempest release, UNLESS the tempest folks are going to coordinate their 
releases with all however-many-we-have plugins. What I'd like to avoid is cutting 
a release in the middle of a patch chain or some refactoring just because 
tempest happened to have a release right now.




If we get agreement from all the plugins, then:
B) The release team or TC helps to find a better model for this use case, or 
improvements to the above model.

C) Once we define the release model, find the team owning that release model (there 
are more than 40 Tempest plugins currently).

NOTE: until we decide the best solution for this use case, each plugin can keep 
doing its releases as per the independent release model.

[1] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
[2] https://docs.openstack.org/tempest/latest/plugin-registry.html
[3] https://github.com/openstack/kuryr-tempest-plugin/releases
https://github.com/openstack/ironic-tempest-plugin/releases
[4] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131011.html


-gmann







[openstack-dev] [sdk] announcing first release of rust-openstack (+ call for contributors)

2018-06-18 Thread Dmitry Tantsur

Hi all,

I'd like to announce my hobby project that I've been working on for some time. 
rust-openstack [1], as the name suggests, is an SDK for OpenStack written in 
Rust! I released version 0.1.0 last week, and now the project is ready for early 
testers and contributors.


Currently only a small subset of the Compute, Networking and Image APIs is 
implemented, as well as password authentication against Identity v3. If you're 
interested in the Rust language, this may be your chance :) I have written a 
short contributor's guide [2] to help you understand the code structure.


Special thanks to the OpenLab project for providing the CI for the project.

Cheers,
Dmitry

[1] https://docs.rs/openstack/latest/openstack/
[2] https://github.com/dtantsur/rust-openstack/blob/master/CONTRIBUTING.md



Re: [openstack-dev] [TripleO] config-download/ansible next steps

2018-06-18 Thread Dmitry Tantsur

On 06/13/2018 03:17 PM, James Slagle wrote:

On Wed, Jun 13, 2018 at 6:49 AM, Dmitry Tantsur  wrote:

Slightly hijacking the thread to provide a status update on one of the items
:)


Thanks for jumping in.



The immediate plan right now is to wait for metalsmith 0.4.0 to hit the
repositories, then start experimenting. I need to find a way to
1. make creating nova instances no-op
2. collect the required information from the created stack (I need networks,
ports, hostnames, initial SSH keys, capabilities, images)
3. update the config-download code to optionally include the role [2]
I'm not entirely sure where to start, so any hints are welcome.


Here are a couple of possibilities.

We could reuse the OS::TripleO::{{role.name}}Server mappings that we
already have in place for pre-provisioned nodes (deployed-server).
This could be mapped to a template that exposes some Ansible tasks as
outputs that drives metalsmith to do the deployment. When
config-download runs, it would execute these ansible tasks to
provision the nodes with Ironic. This has the advantage of maintaining
compatibility with our existing Heat parameter interfaces. It removes
Nova from the deployment so that from the undercloud perspective you'd
roughly have:

Mistral -> Heat -> config-download -> Ironic (driven via ansible/metalsmith)


One thing that came to my mind while planning this work is that I'd prefer all 
nodes to be processed in one step. This will help avoid some issues that we 
have now. For example, the following does not work reliably:


 compute-0: any node with profile:compute
 compute-1: precisely node abcd
 control-0: any node

This has two issues that will pop up randomly:
1. compute-0 can pick node abcd designated for compute-1
2. control-0 can pick a compute node, failing either compute-0 or compute-1

This problem is hard to fix if all deployment requests are processed separately, 
but is quite trivial if the decision is done based on the whole deployment plan. 
I'm going to work on a bulk scheduler like that in metalsmith.
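A whole-plan scheduler for the example above could, very roughly, rank the requests by how constrained they are and assign the most constrained ones first. This is a toy sketch under my own assumptions, not the metalsmith API; a real scheduler would need proper matching and richer error reporting:

```python
def schedule(requests, nodes):
    """Schedule a whole deployment plan at once.

    requests: {name: predicate(node) -> bool}; nodes: list of dicts.
    Returns {request name: node} or raises if the plan cannot be satisfied.
    """
    free = list(nodes)
    result = {}
    # Most-constrained first: requests matching fewer nodes go earlier,
    # so a precise request is never starved by a generic one.
    ranked = sorted(requests.items(),
                    key=lambda item: sum(1 for n in free if item[1](n)))
    for name, pred in ranked:
        match = next((n for n in free if pred(n)), None)
        if match is None:
            raise RuntimeError("No valid node found for %s" % name)
        free.remove(match)
        result[name] = match
    return result

nodes = [{"name": "abcd", "profile": "compute"},
         {"name": "efgh", "profile": "compute"},
         {"name": "ijkl", "profile": "control"}]
requests = {
    "compute-0": lambda n: n["profile"] == "compute",  # any compute node
    "compute-1": lambda n: n["name"] == "abcd",        # precise node
    "control-0": lambda n: True,                       # any node
}
plan = schedule(requests, nodes)
# compute-1 gets abcd first, so compute-0 cannot steal it.
assert plan["compute-1"]["name"] == "abcd"
```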




A further (or completely different) iteration might look like:

Step 1: Mistral -> Ironic (driven via ansible/metalsmith)
Step 2: Heat -> config-download


Step 1 will still use the provided environment to figure out the count of nodes for 
each role, their images, capabilities and (optionally) precise node scheduling?
I'm a bit worried about the last bit: IIRC we currently rely on Heat's %index% 
variable. We can, of course, ask people to replace it with something more 
explicit on upgrade.




Step 2 would use the pre-provisioned node (deployed-server)  feature
already existing in TripleO and treat the just provisioned by Ironic
nodes, as pre-provisioned from the Heat stack perspective. Step 1 and
Step 2 would also probably be driven by a higher level Mistral
workflow. This has the advantage of minimal impact to
tripleo-heat-templates, and also removes Heat from the baremetal
provisioning step. However, we'd likely need some python compatibility
libraries that could translate Heat parameter values such as
HostnameMap to ansible vars for some basic backwards compatibility.


Overall, I like this option better. It will allow an operator to isolate the 
bare metal provisioning step from everything else.






[1] https://github.com/openstack/metalsmith
[2] https://metalsmith.readthedocs.io/en/latest/user/ansible.html



Obviously we have things to consider here such as backwards compatibility
and
upgrades, but overall, I think this would be a great simplification to our
overall deployment workflow.



Yeah, this is tricky. Can we make Heat "forget" about Nova instances? Maybe
by re-defining them to OS::Heat::None?


Not exactly, as Heat would delete the previous versions of the
resources. We'd need some special migrations, or could support the
existing method forever for upgrades, and only deprecate it for new
deployments.


Do I get it right that if we redefine OS::TripleO::{{role.name}}Server to be 
OS::Heat::None, Heat will delete the old {{role.name}}Server instances on the 
next update? That is sad...


I'd prefer not to keep Nova support forever; it is going to be hard to 
maintain and cover with CI. Should we extend Heat to support "forgetting" 
resources? I think it may have a use case outside of TripleO.




I'd like to help with this work. I'll start by taking a look at what
you've got so far. Feel free to reach out if you'd like some
additional dev assistance or testing.



Thanks!



Re: [openstack-dev] [TripleO] config-download/ansible next steps

2018-06-13 Thread Dmitry Tantsur

Slightly hijacking the thread to provide a status update on one of the items :)

On 06/12/2018 07:04 PM, James Slagle wrote:

I wanted to provide an update on some next steps around config-download/Ansible
and TripleO. Now that we've completed transitioning to config-download by
default in Rocky, some might be wondering where we're going next.





4. Ansible driven baremetal deployment

Dmitry Tantsur has indicated he's going to be looking at driving TripleO
baremetal provisioning with Ironic and ansible directly. This would remove
Heat+Nova from the baremetal provisioning workflows we currently use.


I'm actually already looking; my efforts just have not become visible yet.

I started by reviving my old metalsmith project [1] to host the code we need 
to make this happen. It now has a CLI tool and a very dumb (for now) ansible 
role [2] to drive it.


Why a new tool? First, I want it to be reusable outside of TripleO (and outside 
of ansible modules), thus I don't want to put the code directly into, say, 
tripleo-common. Second, the current OpenStack Ansible modules are not quite 
sufficient for the task:


1. Both the os_ironic_node module and the underlying openstacksdk library lack 
support for the critically important VIF attachment API. I'm working on 
addressing that, but it will take substantial time (e.g. we need to stabilize 
the microversion support in openstacksdk).


2. Missing support for building configdrive. Again, can probably be added to 
openstacksdk, and I'll get to it one day.


3. No bulk operations. There is no way, to the best of my knowledge (please tell me 
I'm wrong), to provision several nodes in parallel via the current ansible modules. 
It is probably solvable via a new ansible module, but also see the next points.


4. No scheduling. That is, there is no way out of the box to pick a suitable node 
for deployment. It can be done in pure ansible in the simplest cases, but our 
case is not the simplest. Particularly, I don't want to end up parsing 
capabilities in ansible :) Also, one of the goals of this work is to provide 
better messages than "No valid hosts found".


5. On top of #3 and #4, it is not possible to operate at the deployment level 
rather than the node level. From the current Heat stack we're going to receive a 
list of overcloud instances with their roles and other parameters. Some code has to 
take this input and decide whether to deploy/undeploy something. It's currently 
done by Heat+Nova together, but they're not doing a great job in some corner 
cases. Particularly, replacing a node may be painful.
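As an illustration of the capabilities point above: Ironic capabilities are stored as a flat "key1:value1,key2:value2" string, which is trivial to parse in Python but clumsy in pure ansible. A minimal sketch (a hypothetical helper, not part of any existing module):

```python
def parse_capabilities(caps):
    """Parse an Ironic capabilities string such as
    'profile:compute,boot_mode:uefi' into a dict.

    Returns {} for empty/None input; silently skips malformed items.
    """
    result = {}
    for item in (caps or "").split(","):
        if ":" in item:
            key, _, value = item.partition(":")
            result[key.strip()] = value.strip()
    return result

caps = parse_capabilities("profile:compute,boot_mode:uefi")
assert caps == {"profile": "compute", "boot_mode": "uefi"}
assert parse_capabilities(None) == {}
```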


So, while I do plan to solve #1 and #2 eventually, #3 - #5 require some place to 
put the logic. Putting it to TripleO or to ansible itself will preclude reusing 
it outside of TripleO and ansible accordingly. So, metalsmith is this place for 
now. I think in the far future I will try proposing a module to ansible itself 
that will handle #3 - #5 and will be backed by metalsmith. It will probably have 
a similar interface to the current PoC role [2].


The immediate plan right now is to wait for metalsmith 0.4.0 to hit the 
repositories, then start experimenting. I need to find a way to

1. make creating nova instances no-op
2. collect the required information from the created stack (I need networks, 
ports, hostnames, initial SSH keys, capabilities, images)

3. update the config-download code to optionally include the role [2]
I'm not entirely sure where to start, so any hints are welcome.

[1] https://github.com/openstack/metalsmith
[2] https://metalsmith.readthedocs.io/en/latest/user/ansible.html



Obviously we have things to consider here such as backwards compatibility and
upgrades, but overall, I think this would be a great simplification to our
overall deployment workflow.



Yeah, this is tricky. Can we make Heat "forget" about Nova instances? Maybe by 
re-defining them to OS::Heat::None?


Dmitry




Re: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection)

2018-06-11 Thread Dmitry Tantsur

Hi,

On 06/11/2018 03:53 PM, Ruby Loo wrote:

Hi,

I don't want to hijack the initial thread, but am now feeling somewhat guilty 
about not being vocal wrt Storyboard. Yes, ironic migrated to Storyboard in the 
beginning of this cycle. To date, I have not been pleased with replacing 
Launchpad with Storyboard. I believe that Storyboard is somewhat 
still-in-progress, and that there were/are some features (stories) that are 
outstanding that would make its use better.


 From my point of view (as a developer and core, not a project manager or PTL), 
using Storyboard has made my day-to-day work worse. Granted, no migration is 
without headaches. But some of the main things, like searching for our RFEs 
(which we had tagged in Launchpad), weren't possible. I haven't yet figured out how 
to limit a search to only the 'ironic' project using that 'search'-like GUI, so 
I have been frustrated trying to find particular bugs that I *knew* existed but 
had not memorized the bug numbers.


Yeah, I cannot fully understand the search. I would expect something explicit 
like Launchpad or better something command-based like "project:openstack/ironic 
pxe". This does not seem to work, so I also wonder how to filter all stories 
affecting a project.


Bonus points for giving stories names. They don't even have to be unique, but if I 
had something like

 https://storyboard.openstack.org/#!/story/100500-some-readable-slug/

(where 100500 is an actual story ID), it would help my browser locate them in my 
history.
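The suggested URL scheme could look roughly like this (hypothetical: Storyboard does not currently accept a slug in story URLs):

```python
import re


def story_url(story_id, title):
    """Build a readable story URL of the kind suggested above.

    The slug is derived from the story title: lowercase, with runs of
    non-alphanumeric characters collapsed into single dashes.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return "https://storyboard.openstack.org/#!/story/%d-%s" % (story_id, slug)


url = story_url(100500, "Fix PXE boot on UEFI")
assert url == "https://storyboard.openstack.org/#!/story/100500-fix-pxe-boot-on-uefi"
```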




I haven't been as involved upstream this cycle, so perhaps I have missed other 
emails that have mentioned how to get around or do things with Storyboard. I 
would caution folks that are thinking about migrating; I wish we had delayed it 
until there was better support/features/stories implemented with Storyboard. At 
the time, I was also negligent about actually trying out Storyboard before we 
pushed the button (because I assumed it would be ok, others were using it, why 
wouldn't it suffice?) Perhaps Storyboard can address most of my issues now? 
Maybe updated documentation would help? (I believe the last time I tried to use 
Storyboard was 2 weeks ago, when I was 'search'ing for an old bug in Storyboard. 
I gave up.)


I apologize for not writing a detailed email with specific examples of what is 
lacking (for me) and in hindsight should have sent out email at the time I 
encountered issues/had questions. I guess I am hoping that others can 
fill-in-the-blanks and ask for things that would make Storyboard (more) usable.


No, I didn't watch any videos about using Storyboard, just like I've never 
watched any video about using Launchpad, Trello, Jira, etc. I did try 
looking for documentation at some point, though, and I don't recall finding what I 
was looking for.


--ruby


On Thu, Jun 7, 2018 at 4:25 PM, Kendall Nelson wrote:


Hello :)

I think that these two goals definitely fit the criteria we discussed in
Vancouver during the S Release Goal Forum Session. I know Storyboard
Migration was also mentioned after I had to dip out to another session so I
wanted to follow up on that.

I know it doesn't fit the shiny user facing docket that was discussed at the
Forum, but I do think its time we make migration official in some capacity
as a release goal or some other way. Having migrated Ironic and having
TripleO on the schedule for migration (as requested during the last goal
discussion) in addition to having migrated Heat, Barbican and several others
in the last few months we have reached the point that I think migration of
the rest of the projects is attainable by the end of Stein.

Thoughts?

-Kendall (diablo_rojo)









Re: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict

2018-06-06 Thread Dmitry Tantsur
In the Ironic world, we run doc8 on README.rst as part of the pep8 job. Maybe we 
should make that a common practice?
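For context, the simplest of the checks doc8 applies (long lines, trailing whitespace, tabs) can be sketched in a few lines of pure Python; doc8 itself does much more, including actually parsing the reStructuredText:

```python
def lint_rst_lines(text, max_line_length=79):
    """Yield (line number, message) pairs for simple style problems,
    roughly mirroring doc8's trailing-whitespace, tab and line-length
    checks. This is an illustrative sketch, not doc8's implementation."""
    for num, line in enumerate(text.splitlines(), start=1):
        if line != line.rstrip():
            yield num, "trailing whitespace"
        if "\t" in line:
            yield num, "tab character"
        if len(line) > max_line_length:
            yield num, "line longer than %d chars" % max_line_length

sample = "Title\n=====\n\nSome text.   \n"
problems = list(lint_rst_lines(sample))
assert problems == [(4, "trailing whitespace")]
```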


On 06/06/2018 03:35 PM, Jeremy Stanley wrote:

On 2018-06-06 16:36:45 +0900 (+0900), Akihiro Motoki wrote:
[...]

In addition, such checks are unfortunately not run in the project gate, 
so there is no way to detect problems in advance. 
I think we need a way to check this when a change is made, 
instead of detecting an error only when a release patch is proposed.


While I hate to suggest yet another Python PTI addition, for my
personal projects I test every commit (essentially a check/gate
pipeline job) with:

 python setup.py check --restructuredtext --strict
 python setup.py bdist_wheel sdist

...as proof that it hasn't broken sdist/wheel building nor regressed
the description-file provided in my setup.cfg. My intent is to add
other release artifact tests into the same set so that there are no
surprises come release time.

We sort of address this case in OpenStack projects by forcing sdist
builds in our standard pep8 jobs, so maybe that would be a
lower-overhead place to introduce the setup rst check?
Brainstorming.









Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-05-30 Thread Dmitry Tantsur

On 05/30/2018 03:54 PM, Julia Kreger wrote:

On Tue, May 29, 2018 at 7:42 PM, Zane Bitter  wrote:
[trim]

Since I am replying to this thread, Julia also mentioned the situation where
two core reviewers are asking for opposite changes to a patch. It is never
ever ever the contributor's responsibility to resolve a dispute between two
core reviewers! If you see a core reviewer's advice on a patch and you want
to give the opposite advice, by all means take it up immediately - with *the
other core reviewer*. NOT the submitter. Preferably on IRC and not in the
review. You work together every day, you can figure it out! A random
contributor has no chance of parachuting into the middle of that dynamic and
walking out unscathed, and they should never be asked to.



Absolutely agree! In the case I had in mind, where it happened
to me personally, I think it was 10-15 revisions apart, so it becomes
a little hard to identify when you're starting to play the game of "make
the cores happy to land it". It is not a fun game for the contributor.
Truthfully, it caused me to adopt the habit of intentionally waiting
longer between uploads of new revisions... which does not help at all.

The other conundrum is when you disagree and the person has left a -1
which blocks forward progress and any additional reviews since it gets
viewed as "not ready", which makes it even harder and slower to build
consensus. At some point you get into "Oh, what formatting can I
change to clear that -1 because the person is not responding" mode.


This, by the way, is a much broader and more interesting question. In the case of a 
passer-by leaving a comment ("this link must be https") and disappearing, the 
PTL or any core can remove that reviewer from the review. But what to do about a 
core leaving a comment, or a non-core leaving a potentially useful comment, and then 
going on PTO?




At least beginning to shift the review culture should help many of these issues.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-05-30 Thread Dmitry Tantsur

Hi,

This is a great discussion and a great suggestion overall, but I'd like to add a 
grain of salt here, especially after reading some comments.


Nitpicking is bad, no disagreement. However, I don't want this whole discussion 
to end up marking -1's as offensive or aggressive. Just as often as I see 
newcomers frustrated by the many iterations on their proposed patches, I see newcomers 
being afraid to -1.


From my personal experience, two cases stand out:
1. A person asking me (via a private message) not to put a -1 on their patches, 
because it may cause problems with their managers.
2. A person proposing a follow-up for *any* comment on their patch, including 
important ones.


Whatever decision the TC takes, I would like it to make sure that we don't paint 
putting a -1 as a bad act. Nor do I want "if you care, just follow up" to become an 
excuse for putting up bad contributions.


Additionally, I would like to have something saying that a -1 is valid and 
appropriate if a contribution substantially increases the project's technical 
debt. After already spending *days* refactoring ironic unit tests, I will -1 the 
hell out of any patch that tries to bring them back to their initial state, I 
promise :)


Dmitry

On 05/29/2018 03:55 PM, Julia Kreger wrote:

During the Forum, the topic of review culture came up in session after
session. During these discussions, the subject of our use of nitpicks
was often raised as a point of contention and frustration, especially
by community members that have left the community and were
attempting to re-engage with it. Contributors raised the point
of review feedback requiring extremely precise English, or
compliance with a particular core reviewer's style preferences, which
may not be the same as another core reviewer's.

These things are not just frustrating, but also very inhibiting for
part-time contributors, such as students, who may also be time limited.
Or an operator who noticed something that was clearly a bug,
put forth a very minor fix, and doesn't have the time to revise it over
and over.

While nitpicks do help guide and teach, the consensus seemed to be
that we do need to shift the culture a little bit. As such, I've
proposed a change to our principles[1] in governance that attempts to
capture the essence and spirit of the nitpicking topic as a first
step.

-Julia
-
[1]: https://review.openstack.org/570940







Re: [openstack-dev] Proposing Mark Goddard to ironic-core

2018-05-23 Thread Dmitry Tantsur

On 05/20/2018 04:45 PM, Julia Kreger wrote:

Greetings everyone!

I would like to propose Mark Goddard to ironic-core. I am aware he recently 
joined kolla-core, but his contributions in ironic have been insightful and 
valuable. The kind of value that comes from operative use.


I also make this nomination knowing that our community landscape is changing and 
that we must not silo our team responsibilities, or the ability to move things 
forward, into a small, highly focused team. I trust Mark to use his judgement as he 
has time or need to do so. He might not always have time, but I think at the end 
of the day, we’re all in that same boat.


I'm not sure I understand the first sentence, but I'm fully in support of adding 
Mark anyway.




-Julia









Re: [openstack-dev] [all][api] late addition to forum schedule

2018-05-18 Thread Dmitry Tantsur

On 05/18/2018 12:23 AM, Matt Riedemann wrote:

On 5/17/2018 11:02 AM, Doug Hellmann wrote:

After some discussion on twitter and IRC, we've added a new session to
the Forum schedule for next week to discuss our options for cleaning up
some of the design/technical debt in our REST APIs.


Not to troll too hard here, but it's kind of frustrating to see that twitter 
trumps people actually proposing sessions on time and then having them be rejected.



The session description:

   The introduction of microversions in OpenStack APIs added a
   mechanism to incrementally change APIs without breaking users.
   We're now at the point where people would like to start making
   old things go away, which means we need to hammer out a plan and
   potentially put it forward as a community goal.

[1]https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup 
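To make the mechanism concrete, here is a small Python sketch — not Ironic's actual implementation, though the header name is Ironic's real one and the version values are illustrative — of how a service checks a requested microversion against its supported range; raising the minimum is precisely the step that starts rejecting old clients:

```python
# Sketch only: how an OpenStack-style API negotiates a microversion.
# The header name matches Ironic's real one; MIN/MAX values are examples.

def parse_version(value):
    """Parse a microversion string like '1.31' into a comparable tuple."""
    major, minor = value.split(".")
    return (int(major), int(minor))

MIN_VERSION = parse_version("1.1")   # example server minimum
MAX_VERSION = parse_version("1.31")  # example server maximum

def negotiate(headers):
    """Return the microversion to use for a request, or raise if unsupported."""
    requested = headers.get("X-OpenStack-Ironic-API-Version")
    if requested is None:
        # No header: default to the oldest supported behaviour
        return MIN_VERSION
    if requested == "latest":
        return MAX_VERSION
    version = parse_version(requested)
    if not MIN_VERSION <= version <= MAX_VERSION:
        # Raising MIN_VERSION would make this branch fire for old clients
        raise ValueError("Unsupported microversion %s" % requested)
    return version
```

The "making old things go away" plan essentially amounts to raising `MIN_VERSION` here, which is why it needs community-wide coordination.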



This also came up at the Pike PTG in Atlanta:

https://etherpad.openstack.org/p/ptg-architecture-workgroup

See the "raising the minimum microversion" section. The TODO was Ironic was 
going to go off and do this and see how much people freaked out. What's changed 
since then besides that not happening? Since I'm not on twitter, I don't know 
what new thing prompted this.




Jim was driving this effort, then he left and it went into limbo. I'm not sure 
we're still interested in doing that, given the overall backlog.




Re: [openstack-dev] [tripleo] Encrypted swift volumes by default in the undercloud

2018-05-16 Thread Dmitry Tantsur

Hi,

On 05/15/2018 09:19 PM, Juan Antonio Osorio wrote:

Hello!

As part of the work from the Security Squad, we added the ability for the 
containerized undercloud to encrypt the overcloud plans. This is done by 
enabling Swift's encrypted volumes, which require barbican. Right now it's 
turned off, but I would like to enable it by default [1]. What do you folks think?


I like the idea, but I'm a bit skeptical about adding a new service to an already 
quite bloated undercloud. Why is barbican a hard requirement here?
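For context on the barbican question: Swift's object encryption requires a keymaster middleware in the proxy pipeline, but upstream Swift also ships a simple config-based keymaster. A sketch of that variant (illustrative values, and an abbreviated pipeline) that avoids the barbican-backed kms_keymaster entirely:

```ini
# proxy-server.conf — sketch of enabling Swift at-rest encryption
# without barbican, using the config-based keymaster (pipeline shortened
# for illustration; a real deployment has more middlewares)
[pipeline:main]
pipeline = catch_errors cache keymaster encryption proxy-server

[filter:keymaster]
use = egg:swift#keymaster
# base64 of at least 32 random bytes, e.g. from: openssl rand -base64 32
encryption_root_secret = CHANGE_ME_BASE64_SECRET

[filter:encryption]
use = egg:swift#encryption
```

Whether this simpler keymaster is acceptable for the undercloud's threat model is exactly the kind of question worth answering before making barbican a hard dependency.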




[1] https://review.openstack.org/#/c/567200/

BR

--
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com 









Re: [openstack-dev] [ironic][stable] Re-adding Jim Rollenhagen to ironic stable maintenance team?

2018-05-11 Thread Dmitry Tantsur

Hi,

Funny, I was just about to ask you about it :) Jim is a former PTL, so I cannot 
see why we wouldn't add him to the stable team.


On 05/11/2018 01:50 PM, Julia Kreger wrote:

Greetings folks,

Is there any objection if we re-add Jim to the ironic-stable-maint
team?  He was a member prior to his brief departure and I think it
would be good to have another set of hands that can approve the
changes as three doesn't seem like quite enough when everyone is busy.

If there are no objections, I'll re-add him next week.


I don't remember whether we can actually add people to these teams ourselves, or 
whether it has to be done by the main stable team.




-Julia







Re: [openstack-dev] [ironic] Monthly bug day?

2018-04-30 Thread Dmitry Tantsur
I've created a bluejeans channel for this meeting: 
https://bluejeans.com/309964257. I may be late for it, but I've set it up to be 
usable even without me.


On 04/30/2018 02:39 PM, Michael Turek wrote:

Just tried this, and it seems like Firefox does still require a browser plugin.

Julia, could we use your bluejeans line again?

Thanks!
Mike Turek 


On 4/30/18 7:33 AM, Dmitry Tantsur wrote:

Hi,

On 04/29/2018 10:17 PM, Michael Turek wrote:
Awesome! If everyone doesn't mind the short notice, we'll have it again this 
Thursday @ 1:00 PM to 3:00 PM UTC.


++



I can provide video conferencing through hangouts here https://goo.gl/xSKBS4
Let's give that a shot this time!


Note that the last time I checked Hangouts video messaging required a 
proprietary browser plugin (and hence did not work in Firefox). Using it may 
exclude people not accepting proprietary software and/or avoiding using Chromium.




We can adjust times, tooling, and regular agenda over the next couple 
meetings and see where we settle. If anyone has any questions or suggestions, 
don't hesitate to reach out to me!


Thanks,
Mike Turek 


On 4/25/18 12:11 PM, Julia Kreger wrote:

On Mon, Apr 23, 2018 at 12:04 PM, Michael Turek
<mjtu...@linux.vnet.ibm.com> wrote:


What does everyone think about having Bug Day the first Thursday of every
month?

All for it!















[openstack-dev] [ironic] [all] The last reminder about the classic drivers removal

2018-04-30 Thread Dmitry Tantsur

Hi all,

This is the last reminder that the classic drivers will be removed from ironic. 
We plan on finishing the removal before Rocky-2. See below for information on 
migration.


If for some reason we need to delay the removal, please speak up NOW. Note that 
I'm personally not inclined to delay it past Rocky, since it requires my time 
and effort to track this process.


Cheers,
Dmitry

On 03/06/2018 12:11 PM, Dmitry Tantsur wrote:

Hi all,

As you may already know, we have deprecated classic drivers in the Queens 
release. We don't have specific removal plans yet. But according to the 
deprecation policy we may remove them at any time after May 1st, which will be 
halfway to Rocky milestone 2. Personally, I'd like to do it around then.


The `online_data_migrations` script will handle migrating nodes, if all required 
hardware interfaces and types are enabled before the upgrade to Queens. 
Otherwise, check the documentation [1] on how to update your nodes.


Dmitry

[1] 
https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html
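As an illustration of the prerequisite above — values here are examples, not a recommendation — enabling the hardware types and interfaces that correspond to a classic driver (e.g. pxe_ipmitool → ipmi) in ironic.conf before the upgrade might look like:

```ini
# ironic.conf — illustrative example; enable the hardware type and
# interfaces matching your deprecated classic drivers so that
# online_data_migrations can convert the nodes
[DEFAULT]
enabled_hardware_types = ipmi
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = iscsi,direct
enabled_power_interfaces = ipmitool
enabled_management_interfaces = ipmitool
```

See the linked documentation for the authoritative mapping from each classic driver to its hardware type and interfaces.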





Re: [openstack-dev] [ironic] Monthly bug day?

2018-04-30 Thread Dmitry Tantsur

Hi,

On 04/29/2018 10:17 PM, Michael Turek wrote:
Awesome! If everyone doesn't mind the short notice, we'll have it again this 
Thursday @ 1:00 PM to 3:00 PM UTC.


++



I can provide video conferencing through hangouts here https://goo.gl/xSKBS4
Let's give that a shot this time!


Note that the last time I checked Hangouts video messaging required a 
proprietary browser plugin (and hence did not work in Firefox). Using it may 
exclude people not accepting proprietary software and/or avoiding using Chromium.




We can adjust times, tooling, and regular agenda over the next couple meetings 
and see where we settle. If anyone has any questions or suggestions, don't 
hesitate to reach out to me!


Thanks,
Mike Turek 


On 4/25/18 12:11 PM, Julia Kreger wrote:

On Mon, Apr 23, 2018 at 12:04 PM, Michael Turek
 wrote:


What does everyone think about having Bug Day the first Thursday of every
month?

All for it!









Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-27 Thread Dmitry Tantsur
Okay, it seems like the idea was not well received, but I do have some action 
items out of the discussion (thanks all!):


1. Simplify running cleaning per node. I've proposed patches [0] to add a new 
command (documentation to follow) to do it.


2. Consider running metadata cleaning during deployment in Ironic. This is a bit 
difficult right now, but it will become substantially simpler once the deploy steps work lands.


Any other ideas?

I would like to run at least one TripleO CI job with cleaning enabled. Any 
objections to that? If not, what would be the best job (it has to run ironic, 
obviously)?


[0] https://review.openstack.org/#/q/topic:cleaning+status:open
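As a side note, manual cleaning of a single node can already be triggered by passing the clean steps as JSON — a sketch (the step shown wipes only partition tables; exact step names and arguments depend on your drivers):

```json
[
    {
        "interface": "deploy",
        "step": "erase_devices_metadata"
    }
]
```

Such a file could be passed to `openstack baremetal node clean <node> --clean-steps <file>` while the node is in the manageable state; the patches above would presumably wrap this into a friendlier command.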

On 04/25/2018 03:14 PM, Dmitry Tantsur wrote:

Hi all,

I'd like to restart conversation on enabling node automated cleaning by default 
for the undercloud. This process wipes partitioning tables (optionally, all the 
data) from overcloud nodes each time they move to "available" state (i.e. on 
initial enrolling and after each tear down).


We have had it disabled for a few reasons:
- it was not possible to skip the time-consuming wiping of data from disks
- the way our workflows used to work required going between manageable and 
available steps several times


However, having cleaning disabled has several issues:
- a configdrive left from a previous deployment may confuse cloud-init
- a bootable partition left from a previous deployment may take precedence in 
some BIOS
- a UEFI boot partition left from a previous deployment is likely to confuse 
UEFI firmware
- apparently ceph does not work correctly without cleaning (I'll defer to the 
storage team to comment)


For these reasons we don't recommend having cleaning disabled, and I propose to 
re-enable it.


It has the following drawbacks:
- The default workflow will require another node boot, thus becoming several 
minutes longer (incl. the CI)

- It will no longer be possible to easily restore a deleted overcloud node.

What do you think? If I don't hear principal objections, I'll prepare a patch in 
the coming days.


Dmitry





Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-27 Thread Dmitry Tantsur

Hi Tim,

On 04/26/2018 07:16 PM, Tim Bell wrote:

My worry with changing the default is that it would be like adding the 
following in /etc/environment,

alias ls=' rm -rf / --no-preserve-root'

i.e. an operation which was previously read-only now becomes irreversible.


Well, deleting instances has never been read-only :) The problem really is that 
Heat can delete instances during seemingly innocent operations. And I do agree 
that we cannot just ignore this problem.




We also have current use cases with Ironic where we are moving machines between 
projects by 'disowning' them to the spare pool and then reclaiming them (by 
UUID) into new projects with the same state.


I'd be curious to hear how exactly that works. Does it work at the Nova level or 
at the Ironic level?




However, other operators may feel differently which is why I suggest asking 
what people feel about changing the default.

In any case, changes in default behaviour need to be highly visible.

Tim

-Original Message-
From: "arkady.kanev...@dell.com" <arkady.kanev...@dell.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, 26 April 2018 at 18:48
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

 +1.
 It would be good to also identify the use cases.
     I am surprised that a node should be cleaned up automatically.
     I would expect that we want it to be a deliberate request from the 
administrator.
     Maybe from the user when they "return" a node to the free pool after baremetal usage.
 Thanks,
 Arkady
 
 -Original Message-

 From: Tim Bell [mailto:tim.b...@cern.ch]
 Sent: Thursday, April 26, 2018 11:17 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by 
default?
 
 How about asking the operators at the summit Forum or asking on openstack-operators to see what the users think?
 
 Tim
 
 -Original Message-

 From: Ben Nemec <openst...@nemebean.com>
 Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
 Date: Thursday, 26 April 2018 at 17:39
 To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>, Dmitry Tantsur <dtant...@redhat.com>
 Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by 
default?
 
 
 
 On 04/26/2018 09:24 AM, Dmitry Tantsur wrote:

 > Answering to both James and Ben inline.
 >
 > On 04/25/2018 05:47 PM, Ben Nemec wrote:
 >>
 >>
 >> On 04/25/2018 10:28 AM, James Slagle wrote:
 >>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur
 >>> <dtant...@redhat.com> wrote:
 >>>> On 04/25/2018 04:26 PM, James Slagle wrote:
 >>>>>
 >>>>> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
<dtant...@redhat.com>
 >>>>> wrote:
 >>>>>>
 >>>>>> Hi all,
 >>>>>>
 >>>>>> I'd like to restart conversation on enabling node automated
 >>>>>> cleaning by
 >>>>>> default for the undercloud. This process wipes partitioning 
tables
 >>>>>> (optionally, all the data) from overcloud nodes each time they
 >>>>>> move to
 >>>>>> "available" state (i.e. on initial enrolling and after each tear
 >>>>>> down).
 >>>>>>
 >>>>>> We have had it disabled for a few reasons:
  >>>>>> - it was not possible to skip the time-consuming wiping of data from
 >>>>>> disks
 >>>>>> - the way our workflows used to work required going between
 >>>>>> manageable
 >>>>>> and
 >>>>>> available steps several times
 >>>>>>
 >>>>>> However, having cleaning disabled has several issues:
 >>>>>> - a configdrive left from a previous deployment may confuse
 >>>>>> cloud-init
 >>>>>> - a bootable partition left from a previous deployment may take
 >>>>>> precedence
 >>>>>> in some BIOS
 >>>>>> - an UEFI 

Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Dmitry Tantsur

On 04/26/2018 05:12 PM, James Slagle wrote:

On Thu, Apr 26, 2018 at 10:24 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

Answering to both James and Ben inline.


On 04/25/2018 05:47 PM, Ben Nemec wrote:




On 04/25/2018 10:28 AM, James Slagle wrote:


On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur <dtant...@redhat.com>
wrote:


On 04/25/2018 04:26 PM, James Slagle wrote:
Well, it's not clear what is "safe" here: protect people who explicitly
delete their stacks or protect people who don't realize that a previous
deployment may screw up their new one in a subtle way.



The latter you can recover from, the former you can't if automated
cleaning is true.



Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a reason
to disable the 'rm' command :)


This is a really disingenuous comparison. If you really want to
compare these things with what you're proposing, then it would be to
make --no-preserve-root the default with rm. Which it is not.


If we really go down this path, what TripleO does right now is removing the 'rm' 
command by default and saying "well, you can install it back, if you realize you 
cannot work without it" :)








It's not just about people who explicitly delete their stacks (whether
intentional or not). There could be user error (non-explicit) or
side-effects triggered by Heat that could cause nodes to get deleted.



If we have problems with Heat, we should fix Heat or stop using it. What
you're saying is essentially "we prevent ironic from doing the right thing
because we're using a tool that can invoke 'rm -rf /' at a wrong moment."


Agreed on the Heat point, and once/if we're there, I'd probably not
object to making automated clean the default.

I disagree on how you characterized what I'm saying. I'm not proposing
to prevent Ironic from doing the right thing. If people want to use
automated cleaning, they can. Nothing will prevent that. It just
shouldn't be the default.


It's not about "want to use". It's about "we don't guarantee the correct 
behavior in the presence of previous deployments on non-root disks" and "if you use 
ceph, you must use cleaning".








You couldn't recover from those scenarios if automated cleaning were
true. Whereas you could always fix a deployment error by opting in to
do an automated clean. Does Ironic keep track of whether a node has been
previously cleaned? Could we add a validation to check whether any
nodes might be used in the deployment that were not previously
cleaned?



It may be possible to figure out if a node was ever cleaned. But
then we'll force operators to invoke cleaning manually, right? It will work,
but that's another step in the default workflow. Are you okay with it?


I would be ok with it. But I don't even characterize it as a
completely necessary step on the default workflow. It fixes some
issues as you've pointed out, but also comes with a cost. What we're
discussing is whether it's the default or not. If it is not true by
default, then we wouldn't make it a required step in the default
workflow to make sure it's done. It'd be documented as choice.



Sure, but how do people know if they want it? Okay, if they use Ceph, they have 
to. Then.. mm.. "if you have multiple disks and you're not sure what's on them, 
please clean"? It may work, I wonder how many people will care to follow it though.





Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Dmitry Tantsur

Answering to both James and Ben inline.

On 04/25/2018 05:47 PM, Ben Nemec wrote:



On 04/25/2018 10:28 AM, James Slagle wrote:

On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

On 04/25/2018 04:26 PM, James Slagle wrote:


On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur <dtant...@redhat.com>
wrote:


Hi all,

I'd like to restart conversation on enabling node automated cleaning by
default for the undercloud. This process wipes partitioning tables
(optionally, all the data) from overcloud nodes each time they move to
"available" state (i.e. on initial enrolling and after each tear down).

We have had it disabled for a few reasons:
- it was not possible to skip the time-consuming wiping of data from disks
- the way our workflows used to work required going between manageable
and
available steps several times

However, having cleaning disabled has several issues:
- a configdrive left from a previous deployment may confuse cloud-init
- a bootable partition left from a previous deployment may take
precedence
in some BIOS
- a UEFI boot partition left from a previous deployment is likely to
confuse UEFI firmware
- apparently ceph does not work correctly without cleaning (I'll defer to
the storage team to comment)

For these reasons we don't recommend having cleaning disabled, and I
propose
to re-enable it.

It has the following drawbacks:
- The default workflow will require another node boot, thus becoming
several
minutes longer (incl. the CI)
- It will no longer be possible to easily restore a deleted overcloud
node.



I'm trending towards -1, for these exact reasons you list as
drawbacks. There has been no shortage of occurrences of users who have
ended up with accidentally deleted overclouds. These are usually
caused by user error or unintended/unpredictable Heat operations.
Until we have a way to guarantee that Heat will never delete a node,
or Heat is entirely out of the picture for Ironic provisioning, then
I'd prefer that we didn't enable automated cleaning by default.

I believe we had done something with policy.json at one time to
prevent node delete, but I don't recall if that protected from both
user initiated actions and Heat actions. And even that was not enabled
by default.

IMO, we need to keep "safe" defaults. Even if it means manually
documenting that you should clean to prevent the issues you point out
above. The alternative is to have no way to recover deleted nodes by
default.



Well, it's not clear what is "safe" here: protect people who explicitly
delete their stacks or protect people who don't realize that a previous
deployment may screw up their new one in a subtle way.


The latter you can recover from, the former you can't if automated
cleaning is true.


Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a reason to 
disable the 'rm' command :)




It's not just about people who explicitly delete their stacks (whether
intentional or not). There could be user error (non-explicit) or
side-effects triggered by Heat that could cause nodes to get deleted.


If we have problems with Heat, we should fix Heat or stop using it. What you're 
saying is essentially "we prevent ironic from doing the right thing because 
we're using a tool that can invoke 'rm -rf /' at a wrong moment."




You couldn't recover from those scenarios if automated cleaning were
true. Whereas you could always fix a deployment error by opting in to
do an automated clean. Does Ironic keep track of whether a node has been
previously cleaned? Could we add a validation to check whether any
nodes might be used in the deployment that were not previously
cleaned?


It may be possible to figure out if a node was ever cleaned. But then 
we'll force operators to invoke cleaning manually, right? It will work, but 
that's another step in the default workflow. Are you okay with it?




Is there a way to only do cleaning right before a node is deployed?  If you're 
about to write a new image to the disk then any data there is forfeit anyway.  
Since the concern is old data on the disk messing up subsequent deploys, it 
doesn't really matter whether you clean it right after it's deleted or right 
before it's deployed, but the latter leaves the data intact for longer in case a 
mistake was made.


If that's not possible then consider this an RFE. :-)


It's a good idea, but it may cause problems with rebuilding instances. Rebuild 
is essentially a re-deploy of the OS; users may not expect the whole disk to be 
wiped.


Also it's unclear whether we want to write additional features to work around 
disabled cleaning.




-Ben





Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-25 Thread Dmitry Tantsur

On 04/25/2018 04:26 PM, James Slagle wrote:

On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

Hi all,

I'd like to restart conversation on enabling node automated cleaning by
default for the undercloud. This process wipes partitioning tables
(optionally, all the data) from overcloud nodes each time they move to
"available" state (i.e. on initial enrolling and after each tear down).

We have had it disabled for a few reasons:
- it was not possible to skip the time-consuming wiping of data from disks
- the way our workflows used to work required going between manageable and
available steps several times

However, having cleaning disabled has several issues:
- a configdrive left from a previous deployment may confuse cloud-init
- a bootable partition left from a previous deployment may take precedence
in some BIOS
- a UEFI boot partition left from a previous deployment is likely to
confuse UEFI firmware
- apparently ceph does not work correctly without cleaning (I'll defer to
the storage team to comment)

For these reasons we don't recommend having cleaning disabled, and I propose
to re-enable it.

It has the following drawbacks:
- The default workflow will require another node boot, thus becoming several
minutes longer (incl. the CI)
- It will no longer be possible to easily restore a deleted overcloud node.


I'm trending towards -1, for these exact reasons you list as
drawbacks. There has been no shortage of occurrences of users who have
ended up with accidentally deleted overclouds. These are usually
caused by user error or unintended/unpredictable Heat operations.
Until we have a way to guarantee that Heat will never delete a node,
or Heat is entirely out of the picture for Ironic provisioning, then
I'd prefer that we didn't enable automated cleaning by default.

I believe we had done something with policy.json at one time to
prevent node delete, but I don't recall if that protected from both
user initiated actions and Heat actions. And even that was not enabled
by default.

IMO, we need to keep "safe" defaults. Even if it means manually
documenting that you should clean to prevent the issues you point out
above. The alternative is to have no way to recover deleted nodes by
default.


Well, it's not clear what is "safe" here: protect people who explicitly delete 
their stacks or protect people who don't realize that a previous deployment may 
screw up their new one in a subtle way.
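For reference, the policy.json approach mentioned earlier in the thread would be a sketch like the following — the policy name matches ironic's real `baremetal:node:delete` rule, and `"!"` is oslo.policy's deny-all syntax. Note this only guards deletes arriving through the ironic API:

```json
{
    "baremetal:node:delete": "!"
}
```

An operator wanting admins to retain the ability could use a restrictive rule (e.g. `"rule:is_admin"`) instead of denying everyone outright.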












[openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-25 Thread Dmitry Tantsur

Hi all,

I'd like to restart conversation on enabling node automated cleaning by default 
for the undercloud. This process wipes partitioning tables (optionally, all the 
data) from overcloud nodes each time they move to "available" state (i.e. on 
initial enrolling and after each tear down).
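
For context, the switch being discussed is a single boolean. As far as I remember it is exposed as clean_nodes in undercloud.conf and maps onto ironic's conductor option (option names worth double-checking before relying on them):

```ini
# undercloud.conf (TripleO knob, as I recall it)
clean_nodes = true

# which translates to the following in the undercloud's ironic.conf
[conductor]
automated_clean = true
```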


We have had it disabled for a few reasons:
- it was not possible to skip the time-consuming wiping of data from disks
- the way our workflows used to work required going between manageable and 
available steps several times


However, having cleaning disabled has several issues:
- a configdrive left from a previous deployment may confuse cloud-init
- a bootable partition left from a previous deployment may take precedence in 
some BIOS
- a UEFI boot partition left from a previous deployment is likely to confuse 
UEFI firmware
- apparently ceph does not work correctly without cleaning (I'll defer to the 
storage team to comment)


For these reasons we don't recommend having cleaning disabled, and I propose to 
re-enable it.


It has the following drawbacks:
- The default workflow will require another node boot, thus becoming several 
minutes longer (incl. the CI)

- It will no longer be possible to easily restore a deleted overcloud node.

What do you think? If I don't hear principal objections, I'll prepare a patch in 
the coming days.


Dmitry



Re: [openstack-dev] [all] [api] Re-Reminder on the state of WSME

2018-04-23 Thread Dmitry Tantsur

ironic-inspector is using Flask, and it has been quite nice so far.

On 04/11/2018 12:56 AM, Michael Johnson wrote:

I echo Ben's question about what is the recommended replacement.

Not long ago we were advised to use WSME over the alternatives which
is why Octavia is using the WSME types and pecan extension.

Thanks,
Michael

On Mon, Apr 9, 2018 at 10:16 AM, Ben Nemec  wrote:



On 04/09/2018 07:22 AM, Chris Dent wrote:



A little over two years ago I sent a reminder that WSME is not being
actively maintained:


http://lists.openstack.org/pipermail/openstack-dev/2016-March/088658.html

Today I was reminded of this because a random (typo-related)
patchset demonstrated that the tests were no longer passing and
fixing them is enough of a chore that I (at least temporarily)
marked one test as an expected failure.

  https://review.openstack.org/#/c/559717/

The following projects appear to still use WSME:

  aodh
  blazar
  cloudkitty
  cloudpulse
  cyborg
  glance
  gluon
  iotronic
  ironic
  magnum
  mistral
  mogan
  octavia
  panko
  qinling
  radar
  ranger
  searchlight
  solum
  storyboard
  surveil
  terracotta
  watcher

Most of these are using the 'types' handling in WSME and sometimes
the pecan extension, and not the (potentially broken) Flask
extension, so things should be stable.

However: nobody is working on keeping WSME up to date. It is not a
good long term investment.



What would be the recommended alternative, either for new work or as a
migration path for existing projects?







Re: [openstack-dev] Proposal: The OpenStack Client Library Guide

2018-04-06 Thread Dmitry Tantsur

Hi Adrian,

Thanks for starting this discussion. I'm adding openstack-sigs ML, please keep 
it in the loop. We in API SIG are interested in providing guidance on not only 
writing OpenStack APIs, but also consuming them. For example, we have merged a 
guideline on consuming API versions: 
http://specs.openstack.org/openstack/api-wg/guidelines/sdk-exposing-microversions.html


More inline.

On 04/06/2018 05:55 AM, Adrian Turjak wrote:

Hello fellow OpenStackers,

As some of you have probably heard me rant, I've been thinking about how
to better solve the problem with various tools that support OpenStack or
are meant to be OpenStack clients/tools which don't always work as
expected by those of us directly in the community.

Mostly around things like auth and variable name conventions, areas
where there really should be consistency and overlap.

The example that most recently triggered this discussion was how
OpenStackClient (and os-client-config) supports certain elements of
clouds.yaml and ENVVAR config, while Terraform supports it differently.
You'd often run both on the CLI, often in the same terminal, so it is
always weird when certain auth and scoping values don't work the same.
This one is being worked on, but little problems like this are an
ongoing problem.

The proposal, write an authoritative guide/spec on the basics of
implementing a client library or tool for any given language that talks
to OpenStack.

Elements we ought to cover:
- How all the various auth methods in Keystone work, how the whole authn
and authz process works with Keystone, and how to actually use it to do
what you want.


Yes please!
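
As a taste of what the authn part of such a guide would pin down: the Keystone v3 password-auth request body that every client library ends up constructing. A sketch with made-up credentials; the body is POSTed to <auth_url>/auth/tokens and the token comes back in the X-Subject-Token response header:

```python
import json

# Keystone v3 password authentication request (all names/values hypothetical).
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"name": "Default"},
                    "password": "secret",
                }
            },
        },
        # Scoping determines what the resulting token is authorized for;
        # an unscoped request simply omits this key.
        "scope": {"project": {"name": "demo", "domain": {"name": "Default"}}},
    }
}

print(json.dumps(auth_request, indent=2))
```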


- What common client configuration options exist and how they work
(common variable names, ENVVARs, clouds.yaml), with something like
common ENVVARs documented and a list maintained so there is one
definitive source for what to expect people to be using.


Even bigger YES
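
A minimal clouds.yaml of the kind such a guide would standardize (all values hypothetical; the keys are the ones os-client-config understands):

```yaml
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.com/v3
      username: demo
      password: secret
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```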


- Per project guides on how the API might act that helps facilitate
starting to write code against it beyond just the API reference, and
examples of what to expect. Not exactly a duplicate of the API ref, but
more a 'common pitfalls and confusing elements to be wary of' section
that builds on the API ref of each project.


Oh yeah, esp. what to be mindful of when writing an SDK in a statically typed 
language (I had quite some fun with rust-openstack, I guess Terraform had 
similar issues).




There are likely other things we want to include, and we need to work
out what those are, but ideally this should be a new documentation
focused project which will result in useful guide on what someone needs
to take any programming language, and write a library that works as we
expect it should against OpenStack. Such a guide would also help any
existing libraries ensure they themselves do fully understand and use
the OpenStack auth and service APIs as expected. It should also help to
ensure programmers working across multiple languages and systems have a
much easier time interacting with all the various libraries they might
touch.

A lot of this knowledge exists, but it's hard to parse and not well
documented. We have reference implementations of it all in the likes of
OpenStackClient, Keystoneauth1, and the OpenStackSDK itself (which
os-client-config is now a part of), but what we need is a language
agnostic guide rather than the assumption that people will read the code
of our official projects. Even the API ref itself isn't entirely helpful
since in a lot of cases it only covers the most basic of examples for
each API.

There appears to be interest in something like this, so let's start with
a mailing list discussion, and potentially turn it into something more
official if this leads anywhere useful. :)


Count me in :)



Cheers,
Adrian




Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-03-23 Thread Dmitry Tantsur

On 03/22/2018 04:39 PM, Sean McGinnis wrote:


That's unfortunate. What we really need is a migration path from the
'pbr' way of doing things to something else. I see three possible
avenues at this point in time:

1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar
   things to 'sphinx-apidoc' but it takes the form of an extension.
   From my brief experiments, the output generated from this is
   radically different and far less comprehensive than what 'sphinx-
   apidoc' generates. However, it supports templating so we could
   probably configure this somehow and add our own special directive
   somewhere like 'openstackdocstheme'
2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back
   against upstream Sphinx [1]. This essentially does what the PBR
   extension does but moves configuration into 'conf.py'. However, this
   is currently held up as I can't adequately explain the differences
   between this and 'sphinx.ext.autosummary' (there's definite overlap
   but I don't understand 'autosummary' well enough to compare them).
3. Modify the upstream jobs that detect the pbr integration and have
   them run 'sphinx-apidoc' before 'sphinx-build'. This is the least
   technically appealing approach as it still leaves us unable to build
   stuff locally and adds yet more "magic" to the gate, but it does let
   us progress.
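
For reference, option 1 is mostly a conf.py change; a sketch (the extension names are real Sphinx ones, but the templating and output layout would still need per-project tuning):

```python
# conf.py sketch: replace pbr's autodoc hook with sphinx.ext.autosummary.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.autosummary',
]

# Generate stub .rst pages for everything listed in autosummary directives,
# instead of relying on a pre-build sphinx-apidoc run.
autosummary_generate = True
```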

Try as I may, I don't really have the bandwidth to work on this for
another few weeks so I'd appreciate help from anyone with sufficient
Sphinx-fu to come up with a long-term solution to this issue.

Cheers,
Stephen



I think we could probably go with 1 unless and until 2 becomes an option. It does
change the output quite a bit.

I played around with 3, but I think we will have enough differences between
projects as to _where_ specifically this generated content needs to be placed
that it will make that approach a little more brittle.



One other thing that comes to mind - I think most service projects, if they
are even using this, could probably just drop it. I've found the generated
"API" documentation for service modules to be of very limited use.

That would at least narrow things down to lib projects. So this would still be
an issue for the oslo libs for sure. In that case, you do want that module API
documentation in most cases.


This is also an issue for clients. I would kindly ask people doing this work to 
stop proposing patches that just remove the API reference without any replacement.




But personally, I would encourage service projects to get around this issue by
just not doing it. It would appear that would take care of a large chunk of the
current usage:

http://codesearch.openstack.org/?q=autodoc_index_modules=nope=setup.cfg=




[openstack-dev] [charms] [tripleo] [puppet] [fuel] [kolla] [openstack-ansible] [cloudcafe] [magnum] [mogan] [sahara] [shovel] [watcher] [helm] [rally] Heads up: ironic classic drivers deprecation

2018-03-16 Thread Dmitry Tantsur

Hi all,

If you see your project name in the subject that is because a global search 
revealed usage of "pxe_ipmitool", "agent_ipmitool" or "pxe_ssh" drivers in the 
non-unit-test context in one or more of your repositories.


The classic drivers, such as pxe_ipmitool, were deprecated in Queens, and we're 
on track with removing them in Rocky. Please read [1] about differences between 
classic drivers and newer hardware types. Please refer to [2] on how to update 
your code.
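
In practice the switch looks roughly like this in ironic.conf, plus changing each node's driver field from pxe_ipmitool to ipmi (interface option names per [2]; check it for the full list):

```ini
# ironic.conf: enable the new-style hardware type and its interfaces
[DEFAULT]
enabled_hardware_types = ipmi
enabled_power_interfaces = ipmitool
enabled_management_interfaces = ipmitool
```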


Finally, the pxe_ssh driver was removed some time ago. Please use the standard 
IPMI driver with the virtualbmc project [3] instead.


Please reach out to the ironic team (here or on #openstack-ironic) if you have 
any questions or need help with the transition.


Dmitry

[1] https://docs.openstack.org/ironic/latest/install/enabling-drivers.html
[2] 
https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html
[3] https://github.com/openstack/virtualbmc



Re: [openstack-dev] [tripleo] TLS by default

2018-03-15 Thread Dmitry Tantsur

On 03/15/2018 12:51 AM, Julia Kreger wrote:

On Wed, Mar 14, 2018 at 4:52 AM, Dmitry Tantsur <dtant...@redhat.com> wrote:

Just to clarify: only for public endpoints, right? I don't think e.g.
ironic-python-agent can talk to self-signed certificates yet.




For what it is worth, it is possible for IPA to speak to a self signed
certificate, although it requires injecting the signing private CA
certificate into the ramdisk or iso image that is being used. There
are a few other options that can be implemented, but those may also
lower overall security posture.


Yep, that's the problem.

We can quite easily make IPA talk to custom https.

We cannot securely make IPA expose an https endpoint without using virtual media 
(not supported by tripleo, vendor-specific).


We cannot (IIUC) make iPXE use https with custom certificates without rebuilding 
the firmware from source.






Re: [openstack-dev] Poll: S Release Naming

2018-03-14 Thread Dmitry Tantsur

On 03/14/2018 01:33 PM, Frank Kloeker wrote:

Hi,

it's critical, I would say. They canceled the registration just today [1], but 
there are still copyrights for name parts like  "S-Bahn Halleipzig". I would 
remove it from the voting list to prevent us from trouble. S-Bahn is not really 
a location in Berlin. If it does, S-Bahn is broken ;-)


And it often is /me looks at S42 schedule :D

Actually, I agree, a means of transport is not quite a location.



kind regards

Frank (from Berlin)

[1] https://register.dpma.de/DPMAregister/marke/registerHABM?AKZ=007392194

Am 2018-03-14 13:14, schrieb Jeremy Freudberg:

Hi Dmitry,

According to Wikipedia [0] the trademark was removed. The citation [1]
is actually inaccurate; it was not the final ruling. Regardless [2]
seems to reflect the final result which is that the trademark is
cancelled.

Hope this helps.

[0]
https://en.wikipedia.org/wiki/S-train#Germany,_Austria_and_Switzerland
(end of paragraph)
[1]
http://juris.bundespatentgericht.de/cgi-bin/rechtsprechung/document.py?Gericht=bpatg=en=Aktuell=23159=1=323=1.pdf 


[2]
http://www.eurailpress.de/news/bahnbetrieb/single-view/news/bundesgerichtshof-db-verliert-marke-s-bahn.html 



On Wed, Mar 14, 2018 at 7:50 AM, Dmitry Tantsur <dtant...@redhat.com>
wrote:


Hi,

I suspect that S-Bahn may be a protected (copyright, trademark,
whatever) name. Did you have a chance to check it?

On 03/14/2018 12:58 AM, Paul Belanger wrote:


Greetings all,

It is time again to cast your vote for the naming of the S
Release. This time
is a little different as we've decided to use a public polling
option over per
user private URLs for voting. This means, everybody should proceed
to use the
following URL to cast their vote:





https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1=8cfdc1f5df5fe4d3 


[1]

Because this is a public poll, results will currently be only
viewable by myself
until the poll closes. Once closed, I'll post the URL making the
results
viewable to everybody. This was done to avoid everybody seeing the
results while
the public poll is running.

The poll will officially end on 2018-03-21 23:59:59[1], and
results will be
posted shortly after.

[1]



http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst 


[2]
---

According to the Release Naming Process, this poll is to determine
the
community preferences for the name of the S release of OpenStack.
It is
possible that the top choice is not viable for legal reasons, so
the second or
later community preference could wind up being the name.

Release Name Criteria

Each release name must start with the letter of the ISO basic
Latin alphabet
following the initial letter of the previous release, starting
with the
initial release of "Austin". After "Z", the next name should start
with
"A" again.

The name must be composed only of the 26 characters of the ISO
basic Latin
alphabet. Names which can be transliterated into this character
set are also
acceptable.

The name must refer to the physical or human geography of the
region
encompassing the location of the OpenStack design summit for the
corresponding release. The exact boundaries of the geographic
region under
consideration must be declared before the opening of nominations,
as part of
the initiation of the selection process.

The name must be a single word with a maximum of 10 characters.
Words that
describe the feature should not be included, so "Foo City" or "Foo
Peak"
would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really
cool
should be added to a separate section of the wiki page and the TC
may make
an exception for one or more of them to be considered in the
Condorcet poll.
The naming official is responsible for presenting the list of
exceptional
names for consideration to the TC before the poll opens.

Exact Geographic Region

The Geographic Region from where names for the S release will come
is Berlin

Proposed Names

Spree (a river that flows through the Saxony, Brandenburg and
Berlin states of
Germany)

SBahn (The Berlin S-Bahn is a rapid transit system in and around
Berlin)

Spandau (One of the twelve boroughs of Berlin)

Stein (Steinstraße or "Stein Street" in Berlin, can also be
conveniently
abbreviated as )

Steglitz (a locality in the South Western part of the city)

Springer (Berlin is headquarters of Axel Springer publishing
house)

Staaken (a locality within the Spandau borough)

Schoenholz (A zone in the Niederschönhausen district of Berlin)

Shellhaus (A famous office building)

Suedkreuz ("southern cross" - a railway station in
Tempelhof-Schöneberg)

Schiller (A park in the Mitte borough)

Saatwinkel (The name of a super tiny beach, and its surrounding
neighborhood)
(The adjective form, Saatwinkler is also a really cool
bridge but
that form is too long)

Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
wall, also translates as "sun")

Re: [openstack-dev] [tripleo] Blueprints for Rocky

2018-03-14 Thread Dmitry Tantsur

Hi Alex,

I have two small ironic-related blueprints pending approval:
https://blueprints.launchpad.net/tripleo/+spec/ironic-rescue
https://blueprints.launchpad.net/tripleo/+spec/networking-generic-switch

and one larger:
https://blueprints.launchpad.net/tripleo/+spec/ironic-inspector-overcloud

Could you please check them?

I would also like to talk about possibility to enable cleaning by default in the 
undercloud, but I guess it deserves a separate thread.


On 03/13/2018 02:58 PM, Alex Schultz wrote:

Hey everyone,

So we currently have 63 blueprints for currently targeted for
Rocky[0].  Please make sure that any blueprints you are interested in
delivering have an assignee set and have been approved.  I would like
to have the ones we plan on delivering for Rocky to be updated by
April 3, 2018.  Any blueprints that have not been updated will be
moved out to the next cycle after this date.

Thanks,
-Alex

[0] https://blueprints.launchpad.net/tripleo/rocky



Re: [openstack-dev] [tripleo] TLS by default

2018-03-14 Thread Dmitry Tantsur
Just to clarify: only for public endpoints, right? I don't think e.g. 
ironic-python-agent can talk to self-signed certificates yet.


On 03/14/2018 07:03 AM, Juan Antonio Osorio wrote:

Hello,

As part of the proposed changed by the Security Squad [1], we'd like the 
deployment to use TLS by default.


The first target is to get the undercloud to use it, so a patch has been 
proposed recently [2] [3]. So, just wanted to give a heads up to people.


This should be just fine from a quickstart/testing point of view, since we 
explicitly set the value for autogenerating certificates in the undercloud [4] [5].
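
For reference, the undercloud.conf option involved (name as I remember it from the quickstart default in [4]; verify against the patches in [2] and [3]):

```ini
# undercloud.conf
generate_service_certificate = true
```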


Note that there are also plans to change these defaults for the containerized 
undercloud and the overcloud.


BR

[1] https://etherpad.openstack.org/p/tripleo-security-squad
[2] https://review.openstack.org/#/c/552382/
[3] https://review.openstack.org/552781
[4] 
https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/extras-common/defaults/main.yml#L15
[5] 
https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/undercloud-deploy/templates/undercloud.conf.j2#L117

--
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com 





Re: [openstack-dev] Poll: S Release Naming

2018-03-14 Thread Dmitry Tantsur

On 03/14/2018 10:05 AM, Thierry Carrez wrote:

Jens Harbott wrote:

2018-03-14 9:21 GMT+01:00 Sławomir Kapłoński :

Hi,

Are You sure this link is good? I just tried it and I got info that "Already 
voted" which isn't true in fact :)


Comparing with previous polls, these should be personalized links that
need to be sent out to each voter individually, so I agree that this
looks like a mistake.


We crashed CIVS for the last naming with a private poll sent to all the
Foundation membership, so the TC decided to use public (open) polling
this time around. Anyone with the link can vote, nothing was sent to
each of the voters individually.

The "Already voted" error might be due to CIVS limiting public polling
to one entry per IP, and a colleague of yours already voted... Maybe try
from another IP address ?



I don't think every small company has an unlimited pool of IP addresses. 
Neither do people working from home with a big internet provider.




Re: [openstack-dev] Poll: S Release Naming

2018-03-14 Thread Dmitry Tantsur

Hi,

I suspect that S-Bahn may be a protected (copyright, trademark, whatever) name. 
Did you have a chance to check it?


On 03/14/2018 12:58 AM, Paul Belanger wrote:

Greetings all,

It is time again to cast your vote for the naming of the S Release. This time
is a little different as we've decided to use a public polling option over per
user private URLs for voting. This means, everybody should proceed to use the
following URL to cast their vote:

   
https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1=8cfdc1f5df5fe4d3

Because this is a public poll, results will currently be only viewable by myself
until the poll closes. Once closed, I'll post the URL making the results
viewable to everybody. This was done to avoid everybody seeing the results while
the public poll is running.

The poll will officially end on 2018-03-21 23:59:59[1], and results will be
posted shortly after.

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
---

According to the Release Naming Process, this poll is to determine the
community preferences for the name of the S release of OpenStack. It is
possible that the top choice is not viable for legal reasons, so the second or
later community preference could wind up being the name.

Release Name Criteria

Each release name must start with the letter of the ISO basic Latin alphabet
following the initial letter of the previous release, starting with the
initial release of "Austin". After "Z", the next name should start with
"A" again.

The name must be composed only of the 26 characters of the ISO basic Latin
alphabet. Names which can be transliterated into this character set are also
acceptable.

The name must refer to the physical or human geography of the region
encompassing the location of the OpenStack design summit for the
corresponding release. The exact boundaries of the geographic region under
consideration must be declared before the opening of nominations, as part of
the initiation of the selection process.

The name must be a single word with a maximum of 10 characters. Words that
describe the feature should not be included, so "Foo City" or "Foo Peak"
would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool
should be added to a separate section of the wiki page and the TC may make
an exception for one or more of them to be considered in the Condorcet poll.
The naming official is responsible for presenting the list of exceptional
names for consideration to the TC before the poll opens.

Exact Geographic Region

The Geographic Region from where names for the S release will come is Berlin

Proposed Names

Spree (a river that flows through the Saxony, Brandenburg and Berlin states of
Germany)

SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)

Spandau (One of the twelve boroughs of Berlin)

Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently
abbreviated as )

Steglitz (a locality in the South Western part of the city)

Springer (Berlin is headquarters of Axel Springer publishing house)

Staaken (a locality within the Spandau borough)

Schoenholz (A zone in the Niederschönhausen district of Berlin)

Shellhaus (A famous office building)

Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)

Schiller (A park in the Mitte borough)

Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
(The adjective form, Saatwinkler is also a really cool bridge but
that form is too long)

Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
wall, also translates as "sun")

Savigny (Common place in City-West)

Soorstreet (Street in the Berlin district of Charlottenburg)

Solar (Skybar in Berlin)

See (Seestraße or "See Street" in Berlin)

Thanks,
Paul



Re: [openstack-dev] [ironic] PTG Summary

2018-03-12 Thread Dmitry Tantsur

Inline.

On 03/12/2018 01:00 PM, Tim Bell wrote:

My worry with re-running the burn-in every time we do cleaning is for resource 
utilisation. When the machines are running the burn-in, they're not doing 
useful physics so I would want to minimise the number of times this is run over 
the life time of a machine.


You only have to run it every time if you put the step into automated cleaning. 
However, we also have manual cleaning, which is run explicitly.




It may be possible to do something like the burn in with a dedicated set of 
steps but still use the cleaning state machine.


Yep, this is what manual cleaning is about: an operator explicitly requests it 
with a given set of steps. See 
https://docs.openstack.org/ironic/latest/admin/cleaning.html#manual-cleaning
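
To make that concrete: a manual cleaning request takes an explicit list of steps, roughly like the following (the erase step is a real deploy-interface clean step; the burn-in step is hypothetical, since nothing like it exists in IPA yet):

```json
[
    {"interface": "deploy", "step": "erase_devices_metadata"},
    {"interface": "deploy", "step": "burnin_cpu", "args": {"hours": 336}}
]
```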




Having a cleaning step set (i.e. burn-in means 
cpuburn,memtest,badblocks,benchmark) would make it more friendly for the 
administrator. Similarly, retirement could be done with additional steps such 
as reset2factory.


++

We may even add a reference set of clean steps to IPA, but we'll need your help 
implementing them. I am personally not familiar with how to do burn-in right 
(though IIRC Julia is).




Tim

-Original Message-
From: Dmitry Tantsur <dtant...@redhat.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, 12 March 2018 at 12:47
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [ironic] PTG Summary

 Hi Tim,
 
 Thanks for the information.
 
 I personally don't see problems with cleaning running weeks, when needed. What

 I'd avoid is replicating the same cleaning machinery but with a different 
name.
 I think we should try to make cleaning work for this case instead.
 
 Dmitry
 
 On 03/12/2018 12:33 PM, Tim Bell wrote:

 > Julia,
 >
 > A basic summary of CERN does burn-in is at 
http://openstack-in-production.blogspot.ch/2018/03/hardware-burn-in-in-cern-datacenter.html
 >
 > Given that the burn in takes weeks to run, we'd see it as a different 
step to cleaning (with some parts in common such as firmware upgrades to latest 
levels)
 >
 > Tim
 >
 > -Original Message-
 > From: Julia Kreger <juliaashleykre...@gmail.com>
 > Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
 > Date: Thursday, 8 March 2018 at 22:10
 > To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
 > Subject: [openstack-dev] [ironic] PTG Summary
 >
 > ...
 >  Cleaning - Burn-in
 >
 >  As part of discussing cleaning changes, we discussed supporting a
 >  "burn-in" mode where hardware could be left to run load, memory, or
 >  other tests for a period of time. We did not have consensus on a
 >  generic solution, other than that this should likely involve
 >  clean-steps that we already have, and maybe another entry point into
 >  cleaning. Since we didn't really have consensus on use cases, we
 >  decided the logical thing was to write them down, and then go from
 >  there.
 >
 >  Action Items:
 >  * Community members to document varying burn-in use cases for
 >  hardware, as they may vary based upon industry.
 >  * Community to try and come up with a couple example clean-steps.
 >
 >
 >
 >
 
 






Re: [openstack-dev] [ironic] PTG Summary

2018-03-12 Thread Dmitry Tantsur

Hi Tim,

Thanks for the information.

I personally don't see a problem with cleaning running for weeks when needed. What 
I'd avoid is replicating the same cleaning machinery under a different name. 
I think we should try to make cleaning work for this case instead.


Dmitry

On 03/12/2018 12:33 PM, Tim Bell wrote:

Julia,

A basic summary of how CERN does burn-in is at 
http://openstack-in-production.blogspot.ch/2018/03/hardware-burn-in-in-cern-datacenter.html

Given that burn-in takes weeks to run, we'd see it as a different step from 
cleaning (with some parts in common, such as firmware upgrades to the latest levels)

Tim

-Original Message-
From: Julia Kreger 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 8 March 2018 at 22:10
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [ironic] PTG Summary

...
 Cleaning - Burn-in
 
 As part of discussing cleaning changes, we discussed supporting a

 "burn-in" mode where hardware could be left to run load, memory, or
 other tests for a period of time. We did not have consensus on a
 generic solution, other than that this should likely involve
 clean-steps that we already have, and maybe another entry point into
 cleaning. Since we didn't really have consensus on use cases, we
 decided the logical thing was to write them down, and then go from
 there.
 
 Action Items:

 * Community members to document varying burn-in use cases for
 hardware, as they may vary based upon industry.
 * Community to try and come up with a couple example clean-steps.
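
As a starting point for that last action item, a burn-in clean step might be 
built on the clean-step machinery we already have. The sketch below is 
self-contained Python: it does not use the real ironic-python-agent API, and the 
step registry, decorator, and names are all illustrative, showing only how a 
time-bounded memory burn-in could be expressed as an ordinary clean step:

```python
import time

# Hypothetical registry of clean steps, keyed by step name. In a real
# deployment these would be ironic-python-agent HardwareManager methods.
CLEAN_STEPS = {}


def clean_step(priority):
    """Register a function as a clean step with the given priority."""
    def wrapper(func):
        CLEAN_STEPS[func.__name__] = {'func': func, 'priority': priority}
        return func
    return wrapper


@clean_step(priority=0)  # priority 0: only runs when explicitly requested
def burn_in_memory(duration_seconds=1.0, block_mib=1):
    """Repeatedly write and verify a memory block until the deadline passes.

    Returns the number of completed write/verify iterations.
    """
    deadline = time.monotonic() + duration_seconds
    iterations = 0
    while time.monotonic() < deadline:
        # Fill a block with a known pattern, then verify every byte.
        block = bytearray(b'\xa5' * (block_mib * 1024 * 1024))
        if block.count(0xa5) != len(block):
            raise RuntimeError('memory verification failed')
        iterations += 1
    return iterations


if __name__ == '__main__':
    step = CLEAN_STEPS['burn_in_memory']
    print('completed %d iterations' % step['func'](duration_seconds=0.5))
```

A real version would run for days and drive tools like stress-ng rather than a 
Python loop, but the shape — a low-priority step invoked explicitly during 
manual cleaning — would stay the same.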
 
 




[openstack-dev] [ironic] heads-up: classic drivers deprecation and future removal

2018-03-06 Thread Dmitry Tantsur

Hi all,

As you may already know, we have deprecated classic drivers in the Queens 
release. We don't have specific removal plans yet, but according to the 
deprecation policy we may remove them at any time after May 1st, which will be 
halfway to Rocky milestone 2. Personally, I'd like to do it around then.


The `online_data_migrations` script will handle migrating nodes, if all required 
hardware interfaces and types are enabled before the upgrade to Queens. 
Otherwise, check the documentation [1] on how to update your nodes.


Dmitry

[1] 
https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html



Re: [openstack-dev] [tripleo] Queens RC1 was released!

2018-03-05 Thread Dmitry Tantsur

Hi!

A reminder that https://review.openstack.org/534842 is quite important, as it 
will enable the upgrade from classic drivers to hardware types. The latter will be 
removed in a future release. Since the patch applies to online_data_migrations, 
it will not be run after you upgrade to Queens unless it lands first.


On 03/05/2018 09:17 AM, Emilien Macchi wrote:

TripleO team is proud to announce that we released Queens RC1!

Some numbers:
210 bugs fixed
7 features implemented

In Pike RC1:
138 bugs fixed
8 features implemented

In Ocata RC1:
62 bugs fixed
7 features implemented

In Newton RC1:
51 bugs fixed
11 features implemented


Unless we find a need to do it, we won't release RC2, but we'll see how things go 
over the next few days.

We encourage people to backport their bugfixes to stable/queens.
Also, all work related to FFU & upgrades is moving to rocky-1, but we expect the 
patches to be backported into stable/queens.


Reminder: backports to stable/queens should be done by patch authors to help 
the PTL & TripleO stable maintainers.


Thanks and nice work everyone!
--
Emilien Macchi




Re: [openstack-dev] [ironic] Polling for new meeting time?

2018-03-05 Thread Dmitry Tantsur

On 03/04/2018 09:46 PM, Zhipeng Huang wrote:

Thx Julia,

Another option is instead of changing meeting time, you could establish a 
tick-tock meeting, for example odd weeks for US-Euro friendly times and even 
weeks for US-Asia friendly times.


We tried that roughly two years ago, and it did not work because very few people 
showed up at the APAC-friendly time. I think the goal of this poll is to figure out how 
many people would show up now.




On Mar 4, 2018 7:01 PM, "Julia Kreger" wrote:


Greetings everyone!

As our community composition has shifted to be more global, the
question has arisen if we should consider shifting the meeting to be
more friendly to some of our contributors in the APAC time zones.
Alternatively this may involve changing our processes to better plan
and communicate, but the first step is to understand our overlaps and
what might work well for everyone.

I have created a doodle poll, from which I would like to understand what
times would ideally work, and from there we can determine if there is
a better time to meet.

The poll can be found at: https://doodle.com/poll/6kuwixpkkhbwsibk


Please don't feel the need to select times that would be burdensome to
yourself. This is only to gather information as to the time of day
that would be ideal for everyone. All times are set as UTC on the
poll.

Once we have collected some data, we should expect to discuss during
our meeting on the 12th.

Thanks everyone!

-Julia







Re: [openstack-dev] [ironic] Polling for new meeting time?

2018-03-05 Thread Dmitry Tantsur

On 03/04/2018 07:00 PM, Julia Kreger wrote:

Greetings everyone!

As our community composition has shifted to be more global, the
question has arisen if we should consider shifting the meeting to be
more friendly to some of our contributors in the APAC time zones.
Alternatively this may involve changing our processes to better plan
and communicate, but the first step is to understand our overlaps and
what might work well for everyone.

I have created a doodle poll, from which I would like to understand what
times would ideally work, and from there we can determine if there is
a better time to meet.

The poll can be found at: https://doodle.com/poll/6kuwixpkkhbwsibk

Please don't feel the need to select times that would be burdensome to
yourself. This is only to gather information as to the time of day
that would be ideal for everyone. All times are set as UTC on the
poll.


Are you sure? I'm asking because the last time I checked, Doodle created all 
polls in some specific time zone, defaulting to your local one. Then for each 
participant it converts the times to their local time. E.g. for me the time span 
is 1am to 12am Berlin time (UTC+1), is that what you expected?




Once we have collected some data, we should expect to discuss during
our meeting on the 12th.

Thanks everyone!

-Julia



Re: [openstack-dev] [ironic] Stepping down from Ironic core

2018-02-23 Thread Dmitry Tantsur

Hi Vasyl,

I'm sad to hear it :( Thank YOU for everything! The only thing in the world more 
valuable than your contributions to ironic is the joy of hanging out with you at 
the events :) Good luck and do not disappear.


Dmitry

On 02/23/2018 10:02 AM, Vasyl Saienko wrote:

Hey Ironic community!

Unfortunately I don't work on Ironic as much as I used to any more, so I'm
stepping down from core reviewers.

So, thanks for everything everyone, it's been great to work with you
all for all these years!!!


Sincerely,
Vasyl Saienko




Re: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers]

2018-02-08 Thread Dmitry Tantsur

On 02/07/2018 10:31 PM, Tony Breeds wrote:

On Thu, Feb 08, 2018 at 08:18:37AM +1100, Tony Breeds wrote:


Okay. It's safe to ignore then ;P We should probably remove it from
projects.txt if it really is empty. I'll propose that.


Oh, my bad: ironic-python-agent-builder was included because it's an
ironic project[1], NOT because it's listed in projects.txt. Given
that, it's clearly not for me to remove anything.

Having said that, if the project hasn't had any updates at all since its
creation in July 2017, perhaps it's no longer needed and could be
removed?


We do plan to use it, we just never had time to populate it :(



Yours Tony.

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n1539





Re: [openstack-dev] [OpenStackClient][Security][ec2-api][heat][horizon][ironic][kuryr][magnum][manila][masakari][neutron][senlin][shade][solum][swift][tacker][tricircle][vitrage][watcher][winstackers]

2018-02-07 Thread Dmitry Tantsur

Hi,

On 02/07/2018 05:23 PM, Matthew Thode wrote:

Hi all,

it looks like some of your projects may need to cut a queens
branch/release.  Is there anything we can do to move it along?


Review patches? Make the gate work faster? :)

The Ironic team is working on it; we expect stable/queens requests to come later 
today or early tomorrow. Two more comments inline.




The following is the list I'm working off of (will be updated as
projects release)
https://gist.github.com/prometheanfire/9449355352d97207aa85172cd9ef4b9f

As of right now it's as follows.

# Projects without team or release model could not be found in 
openstack/releases for queens
openstack/almanach
openstack/compute-hyperv
openstack/ekko
openstack/gce-api
openstack/glare
openstack/ironic-staging-drivers


I don't think non-official projects get tracked via openstack/releases.


openstack/kosmos
openstack/mixmatch
openstack/mogan
openstack/nemesis
openstack/networking-dpm
openstack/networking-l2gw
openstack/networking-powervm
openstack/nova-dpm
openstack/nova-lxd
openstack/nova-powervm
openstack/os-xenapi
openstack/python-cratonclient
openstack/python-glareclient
openstack/python-kingbirdclient
openstack/python-moganclient
openstack/python-oneviewclient
openstack/python-valenceclient
openstack/swauth
openstack/tap-as-a-service
openstack/trio2o
openstack/valence
openstack/vmware-nsx
openstack/vmware-nsxlib

# Projects missing a release/branch for queens
openstackclient                OpenStackClient
anchor                         Security
ec2-api                        ec2-api
django_openstack_auth          horizon
horizon-cisco-ui               horizon
bifrost                        ironic
ironic-python-agent-builder    ironic


This one is empty and will not be released for Queens.


magnum                         magnum
magnum-ui                      magnum
manila-image-elements          manila
masakari                       masakari
masakari-monitors              masakari
python-masakariclient          masakari
os-service-types               shade
tacker                         tacker # I think this one is released
tacker-horizon                 tacker # but not this one

# Repos with type: horizon-plugin  (typically release a little later)
manila-ui                      manila
neutron-vpnaas-dashboard       neutron
senlin-dashboard               senlin
solum-dashboard                solum
watcher-dashboard              watcher

# Repos with type: other
heat-agents                    heat
ironic-python-agent            ironic
kuryr-kubernetes               kuryr
neutron-vpnaas                 neutron
networking-hyperv              winstackers

# Repos with type: service
ironic                         ironic
swift                          swift
tricircle                      tricircle
vitrage                        vitrage
watcher                        watcher





Re: [openstack-dev] [ptg] Dublin PTG proposed track schedule

2018-02-07 Thread Dmitry Tantsur

On 02/07/2018 12:28 PM, Thierry Carrez wrote:

Lance Bragstad wrote:

On 02/05/2018 09:34 AM, Thierry Carrez wrote:

Lance Bragstad wrote:

Colleen started a thread asking if there was a need for a baremetal/vm
group session [0], which generated quite a bit of positive response. Is
there still a possibility of fitting that in on either Monday or
Tuesday? The group is usually pretty large.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html

Yes, we can still allocate a 80-people room or a 30-people one. Let me
know if you prefer Monday, Tuesday or both.

Awesome - we're collecting topics in an etherpad, but we're likely only
going to get to three or four of them [0] [1]. We can work those topics
into two sessions. One on Monday and one on Tuesday, just to break
things up in case other things are happening those days that people want
to get to.


Looking at that etherpad, do you need the room allocated for the whole day
on Monday/Tuesday? It's doable, but if you already know you won't do
anything on Monday afternoon, we can make that room reservable then.

What should the track name be ? Some people suggested "cross-project
identity integration" instead of baremetal-vm.


++ please do not use bm-vm, it confuses everyone not involved from the beginning







Re: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core

2018-02-06 Thread Dmitry Tantsur

+1

On 02/05/2018 07:12 PM, Julia Kreger wrote:

I would like to nominate Hironori Shiina to ironic-core. He has been
working in the ironic community for some time, and has been helping
over the past several cycles with more complex features. He has
demonstrated an understanding of Ironic's code base, mechanics, and
overall community style. His review statistics are also extremely
solid. I personally have a great deal of trust in his reviews.

I believe he would make a great addition to our team.

Thanks,

-Julia



[openstack-dev] [ironic] driver composition: help needed from vendors

2018-02-05 Thread Dmitry Tantsur

Hi everyone,

We have landed changes deprecating classic drivers, and we may remove classic 
drivers as early as end of Rocky. I would like to ask those who maintain drivers 
for ironic a few favors:


1. We have landed a database migration [1] to change nodes from classic drivers 
to hardware types automatically. Please check the mapping [2] for your drivers 
for correctness.


2. Please update your documentation pages to primarily use hardware types. 
You're free to still mention classic drivers or remove the information about 
them completely.


3. Please update your CI to use hardware types on master (queens and newer). 
Please make sure that the coverage does not suffer. For example, if you used to 
test pxe_foo and agent_foo, the updated CI should test the "foo" hardware type with 
the "iscsi" and "direct" deploy interfaces.
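
The pxe_foo/agent_foo rule from item 3 can be pictured as a simple lookup. The 
sketch below is only an illustration of that naming convention, not the real 
migration code: actual classic driver names do not always follow the plain 
prefix rule (e.g. pxe_ipmitool maps to the "ipmi" hardware type), so check the 
mapping in [2] for your drivers.

```python
def hardware_type_for(classic_driver):
    """Translate a classic driver name into (hardware type, deploy interface).

    Follows the pxe_foo/agent_foo convention only; real driver names may
    deviate from this rule, so consult the authoritative mapping.
    """
    if classic_driver.startswith('pxe_'):
        return classic_driver[len('pxe_'):], 'iscsi'
    if classic_driver.startswith('agent_'):
        return classic_driver[len('agent_'):], 'direct'
    raise ValueError('not a classic driver name: %s' % classic_driver)


print(hardware_type_for('pxe_foo'))    # ('foo', 'iscsi')
print(hardware_type_for('agent_foo'))  # ('foo', 'direct')
```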


Please let us know if you have any concerns.

Thanks,
Dmitry

[1] https://review.openstack.org/534373
[2] https://review.openstack.org/539589



[openstack-dev] [requirements][release] FFE for sushy bug-fix release

2018-02-05 Thread Dmitry Tantsur

Hi all,

I'm requesting an exception to proceed with the release of the sushy library. To 
the best of my knowledge, the library is only consumed by ironic and at least one 
vendor support library that is outside of the official governance. The release 
request is [1]. It addresses a last-minute bug in the authentication code; 
without it, authentication will not work in some cases.


Thanks,
Dmitry

[1] https://review.openstack.org/540824

P.S.
We really need a feature freeze period for libraries to avoid this. But it 
cannot be introduced with the current library release freeze. Another PTG topic? :)




[openstack-dev] [ironic] Not running for PTL this cycle

2018-02-04 Thread Dmitry Tantsur

Hi all,

I guess it's quite obvious already, but I won't be running for the PTL position this 
time. It's been a challenging and interesting journey, I've learned a lot, and I 
believe we've achieved a lot together. Now I'd like to get back to calm waters 
and allow others to drive the project forward :) Of course I'm not going 
anywhere far, and I'm ready to help whoever gets this chair with their new 
challenge.


Now a small request: please leave me anonymous feedback at 
https://goo.gl/forms/810u3j8Yh2fymUMG2 that'll help me to improve further :)


Thank you all,
Dmitry



Re: [openstack-dev] [barbican] [glance] [ironic] [neutron] [tacker] [tc] policy in code goal

2018-01-31 Thread Dmitry Tantsur

On 01/31/2018 06:23 PM, Lance Bragstad wrote:



On 01/31/2018 11:20 AM, Dmitry Tantsur wrote:

Hi!

On 01/31/2018 06:16 PM, Lance Bragstad wrote:

Hey folks,

The tracking tool for the policy-and-docs-in-code goal for Queens [0]
lists a couple of projects remaining for the goal [1]. I wanted to start a
discussion with said projects to see how we want to go about the work in
the future; we have a couple of options.


I was under the assumption that ironic had finished this goal. I'll wait
for pas-ha to weigh in, but I was not planning any activities for it.

It looks like there is still an unmerged patch tagged with the
policy-and-docs-in-code topic [0].

[0]
https://review.openstack.org/#/q/is:open+topic:policy-and-docs-in-code+project:openstack/ironic


But is it required? We've marked the goal as done already, and pas-ha is no 
longer working on it AFAIK.






I can update the goal document saying the work is still
underway for those projects. We can also set aside time at the PTG to
finish up that work if people would like more help. This might be
something we can leverage the baremetal/vm room for if we get enough
interest [2].


Mmm, the scope of the bm/vm room is already unclear to me; this may
add to the confusion. Maybe just a "Goals workroom"?



I want to get the discussion rolling if there is something we need to
coordinate for the PTG. Thoughts?

Thanks,

Lance


[0] https://governance.openstack.org/tc/goals/queens/policy-in-code.html
[1] https://www.lbragstad.com/policy-burndown/
[2]
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html












Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-01-31 Thread Dmitry Tantsur

On 01/31/2018 06:15 PM, Matt Riedemann wrote:

On 1/30/2018 9:33 AM, Colleen Murphy wrote:

At the last PTG we had some time on Monday and Tuesday for
cross-project discussions related to baremetal and VM management. We
don't currently have that on the schedule for this PTG. There is still
some free time available that we can ask for[1]. Should we try to
schedule some time for this?

 From a keystone perspective, some things we'd like to talk about with
the BM/VM teams are:

- Unified limits[2]: we now have a basic REST API for registering
limits in keystone. Next steps are building out libraries that can
consume this API and calculate quota usage and limit allocation, and
developing models for quotas in project hierarchies. Input from other
projects is essential here.
- RBAC: we've introduced "system scope"[3] to fix the admin-ness
problem, and we'd like to guide other projects through the migration.
- Application credentials[4]: this main part of this work is largely
done, next steps are implementing better access control for it, which
is largely just a keystone team problem but we could also use this
time for feedback on the implementation so far

There's likely some non-keystone-related things that might be at home
in a dedicated BM/VM room too. Do we want to have a dedicated day or
two for these projects? Or perhaps not dedicated days, but
planned-in-advance meeting time? Or should we wait and schedule it
ad-hoc if we feel like we need it?

Colleen

[1] 
https://docs.google.com/spreadsheets/d/e/2PACX-1vRmqAAQZA1rIzlNJpVp-X60-z6jMn_95BKWtf0csGT9LkDharY-mppI25KjiuRasmK413MxXcoSU7ki/pubhtml?gid=1374855307=true 

[2] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/limits-api.html 

[3] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html 

[4] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/application-credentials.html 






These all seem like good topics for big cross-project issues.

I've never liked the "BM/VM" platform naming thing, it seems to imply that the 
only things one needs to care about for these discussions is if they work on or 
use nova and ironic, and that's generally not the case.


++ can we please rename it? I think people (myself included) will specifically 
expect something related to bare metal instances co-existing with virtual 
ones (e.g. scheduling or networking concerns). Which is also a great topic, but 
it does not seem to be present on the list.




So if you do have a session about this really cross-project platform-specific 
stuff, can we at least not call it "BM/VM"? Plus, "BM" always makes me think of 
something I'd rather not see in a room with other people.







Re: [openstack-dev] [barbican] [glance] [ironic] [neutron] [tacker] [tc] policy in code goal

2018-01-31 Thread Dmitry Tantsur

Hi!

On 01/31/2018 06:16 PM, Lance Bragstad wrote:

Hey folks,

The tracking tool for the policy-and-docs-in-code goal for Queens [0]
lists a couple of projects remaining for the goal [1]. I wanted to start a
discussion with said projects to see how we want to go about the work in
the future; we have a couple of options.


I was under the assumption that ironic had finished this goal. I'll wait for pas-ha 
to weigh in, but I was not planning any activities for it.




I can update the goal document saying the work is still
underway for those projects. We can also set aside time at the PTG to
finish up that work if people would like more help. This might be
something we can leverage the baremetal/vm room for if we get enough
interest [2].


Mmm, the scope of the bm/vm room is already unclear to me; this may add to the 
confusion. Maybe just a "Goals workroom"?




I want to get the discussion rolling if there is something we need to
coordinate for the PTG. Thoughts?

Thanks,

Lance


[0] https://governance.openstack.org/tc/goals/queens/policy-in-code.html
[1] https://www.lbragstad.com/policy-burndown/
[2]
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126743.html






[openstack-dev] [ironic] ansible deploy, playbooks and containers?

2018-01-31 Thread Dmitry Tantsur

Hi all,

I'd like to discuss one idea that came to me while trying to use the ansible 
deploy in TripleO.


The ansible deploy interface is all about customization. Meaning, we expect 
people to modify the playbooks. I have two concerns with it:


1. Nearly any addition requires a full copy of the playbooks, which will make 
the operators miss any future updates to the shipped version (e.g. from packages).


2. We require operators to modify playbooks on the hard drive in a location 
available to ironic-conductor. This is inconvenient when there are many 
conductors and quite hairy with containers.


So, what came to my mind is:

1. Let us maybe define some hook points in our playbooks and allow operators to 
override only them? I'm not sure how that would look, so suggestions are 
welcome.


2. Let us maybe allow a Swift or http(s) URL for the playbooks_path 
configuration? That would be a link to a tarball that ironic unpacks to a 
temporary location before executing.
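
To make the second option concrete, a minimal sketch of the fetch-and-unpack 
step (plain-http case only; a Swift URL would need an authenticated client 
instead of urllib, and the function name is just for illustration):

```python
import os
import tarfile
import tempfile
import urllib.request


def fetch_playbooks(url):
    """Download a playbooks tarball and unpack it into a fresh temporary
    directory, returning the path that would then serve as playbooks_path.

    Assumes the URL points at a trusted tarball, since members are
    extracted as-is.
    """
    workdir = tempfile.mkdtemp(prefix='ironic-playbooks-')
    tarball = os.path.join(workdir, 'playbooks.tar.gz')
    urllib.request.urlretrieve(url, tarball)
    with tarfile.open(tarball) as archive:
        archive.extractall(workdir)
    return workdir
```

The conductor would call this once per deployment (or cache by URL), so 
operators could publish customized playbooks without touching the conductor 
hosts or container images.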


What do you think?



Re: [openstack-dev] [nova][ironic] Tagging newton EOL

2018-01-31 Thread Dmitry Tantsur

Hi!

Ironic is ready for newton EOL.

On 01/31/2018 05:17 AM, Tony Breeds wrote:

Hi All,
 When we tagged newton EOL in October there were in-flight reviews
for nova and ironic that needed to land before we could EOL them.  That
work completed but I dropped the ball.  So can we tag those last 2
repos?

As in October a member of the infra team needs to do this *or* I can
be added to Project Bootstrappers[1] for long enough to do this.

# EOL repos belonging to ironic
eol_branch.sh -- stable/newton newton-eol openstack/ironic
# EOL repos belonging to nova
eol_branch.sh -- stable/newton newton-eol openstack/nova

Yours Tony.

[1] https://review.openstack.org/#/admin/groups/26,members






[openstack-dev] [ironic] FFE - classic drivers deprecation

2018-01-23 Thread Dmitry Tantsur

Hi all,

I'm writing to request an FFE for the classic drivers deprecation work [1][2]. 
This is a part of the driver composition reform [3] - the effort started in 
Ocata to revamp bare metal drivers.


The following changes are in scope of this FFE:
1. Provide an automatic migration to hardware types as part of 'ironic-dbsync 
online_data_migrations'

2. Update the CI to use hardware types
3. Issue a deprecation warning when loading classic drivers, and deprecate 
enabled_drivers option.


Finishing it in Queens will allow us to stick to our schedule (outlined in [1]) 
to remove classic drivers in Rocky. Keeping two methods of loading drivers is a 
maintenance burden. Even worse, two sets of mostly equivalent drivers confuse 
users, and the confusion will increase as we introduce features (like rescue) 
that are only available for nodes using the new-style drivers.


The downside of this work is that it introduces a non-trivial data migration 
close to the end of the cycle. Thus, it is designed [1][2] to not fail if the 
migration cannot fully succeed due to environmental reasons.
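
That "do not fail on environmental problems" behaviour amounts to a best-effort 
pass over the nodes: migrate what we can, and count rather than fail what we 
cannot. A simplified sketch (the real logic lives in ironic's 
online_data_migrations; node records and the mapping here are illustrative):

```python
def migrate_classic_drivers(nodes, enabled_types, mapping):
    """Best-effort migration from classic drivers to hardware types.

    Nodes whose target hardware type is unknown or not enabled are
    skipped instead of failing the whole migration; returns the
    (migrated, skipped) counts so the operator can follow up.
    """
    migrated, skipped = 0, 0
    for node in nodes:
        target = mapping.get(node['driver'])
        if target is None or target not in enabled_types:
            skipped += 1          # environmental problem: leave node as-is
            continue
        node['driver'] = target   # switch the node to the hardware type
        migrated += 1
    return migrated, skipped


nodes = [{'driver': 'pxe_ipmitool'}, {'driver': 'pxe_unknown'}]
mapping = {'pxe_ipmitool': 'ipmi'}
print(migrate_classic_drivers(nodes, {'ipmi'}, mapping))  # (1, 1)
```

Re-running after fixing the environment (e.g. enabling the missing hardware 
type) would then pick up the skipped nodes.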


rloo and stendulker were so kind to agree to review this work during the feature 
freeze window, if it gets an exception.


Dmitry

[1] 
http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html

[2] https://review.openstack.org/536298
[3] 
http://specs.openstack.org/openstack/ironic-specs/specs/7.0/driver-composition-reform.html




Re: [openstack-dev] [ironic] FFE request for node rescue feature

2018-01-23 Thread Dmitry Tantsur
I'm +1 on this, because the feature has been proposed for a while (it has changed 
contributor groups at least once) and is needed for feature parity with 
virtual machines in nova.


On 01/23/2018 06:56 AM, Shivanand Tendulker wrote:

Hi

The rescue feature [1] is a high priority for ironic in Queens. Its spec was 
merged in Newton. This feature is necessary for users that lose regular access 
to their machines (e.g. lost passwords).


Landing the node rescue feature late in the cycle will leave less time for 
testing, with a risk of the feature being released with defects. The code 
changes are fairly isolated from existing code to ensure they do not cause any 
regression. The ironic-side rescue code patches are all in review [2], and are 
now getting positive reviews or minor negative feedback.


dtantsur and TheJulia have kindly agreed to review these patches during the FFE 
window.

[1] 
https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/implement-rescue-mode.html
[2] 
https://review.openstack.org/#/q/topic:bug/1526449+(status:open+AND+project:openstack/ironic)


Thanks and Regards,
Shiv (stendulker)








Re: [openstack-dev] [ironic] FFE request for node traits

2018-01-23 Thread Dmitry Tantsur
+1 on moving forward with it. That's important for future nova work, as well as 
for our deploy steps work.


On 01/22/2018 10:11 PM, Mark Goddard wrote:
The node traits feature [1] is an essential priority for ironic in Queens, and 
is an important step in the continuing evolution of scheduling enabled by the 
placement API. Traits will allow us to move away from capability-based 
scheduling. Capabilities have several limitations for scheduling including 
depending on filters in nova-scheduler rather than allowing placement to select 
matching hosts. Several upcoming features depend on traits [2].
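
A rough sketch of the difference (the node data and trait names below are made up): capabilities are a single comma-separated string that every consumer has to parse, while traits are a flat set that placement can match directly:

```python
def parse_capabilities(cap_string):
    """Parse ironic's 'key1:value1,key2:value2' capability format."""
    caps = {}
    for pair in cap_string.split(','):
        if ':' in pair:
            key, value = pair.split(':', 1)
            caps[key.strip()] = value.strip()
    return caps

def node_matches_traits(node_traits, required_traits):
    """Placement-style matching: every required trait must be present."""
    return required_traits <= set(node_traits)

node = {
    'properties': {'capabilities': 'boot_mode:uefi,raid_level:1'},
    'traits': ['CUSTOM_RAID1', 'HW_CPU_X86_VMX'],
}

# Capability scheduling: parse the string, then compare values in a filter.
assert parse_capabilities(node['properties']['capabilities'])['boot_mode'] == 'uefi'
# Trait scheduling: plain set containment, no string parsing involved.
assert node_matches_traits(node['traits'], {'CUSTOM_RAID1'})
assert not node_matches_traits(node['traits'], {'CUSTOM_GPU'})
```

The set-containment check is what lets placement select matching hosts itself, instead of relying on a nova-scheduler filter to parse capability strings.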


Landing node traits late in the cycle will leave less time for testing, with a 
risk that the feature is released with defects. There are changes
at most major levels in the code except the drivers, but these are for the most 
part fairly isolated from existing code. The current issues with the grenade CI 
job mean that upgrade code paths are not being exercised frequently, and could 
lead to additional test/bug fix load on the team later in the cycle. The node 
traits code patches are all in review [3], and are now generally getting 
positive reviews or minor negative feedback.


rloo and TheJulia have kindly offered to review during the FFE window.

[1] 
http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/node-traits.html
[2] 
https://review.openstack.org/#/c/504952/7/specs/approved/config-template-traits.rst

[3] https://review.openstack.org/#/q/topic:bug/1722194+(status:open)

Thanks,
Mark (mgoddard)








[openstack-dev] [ironic] Deadlines, feature freeze and exceptions

2018-01-22 Thread Dmitry Tantsur

Hi all!

We're near the end of the cycle. Here are some important dates to be mindful of:

Thu, Jan 25th - final Queens releases of python-ironicclient and 
python-ironic-inspector-client. Any features that land after that point will get 
to Rocky, no exceptions. Yes, even if the API itself lands in ironic in Queens.


Thu, Jan 25th - beginning of the feature freeze for ironic and many other projects. 
No features should land after that date without getting a formal exception first 
- please pay attention when approving the patches. A procedure for a feature 
freeze exception is outlined below.


Fri, Feb 2nd - hard feature freeze. All features with an exception must land by 
this point. Starting Monday and until branching, we only land bug fixes and 
documentation updates on master.


Thu, Feb 8th - final feature releases for all remaining projects and creation of 
stable/queens. At this point the feature freeze is lifted and master is opened 
for Rocky development.


Now, how to request a feature freeze exception. Please:

* Send an email to this mailing list, with [ironic] and FFE in its subject.
* Outline the reason why you think the feature should go to Queens, and what 
the downsides are. Keep in mind that no API additions will get client support 
after this Thursday.

* Evaluate and explain the risks of landing the feature so late in the cycle.
* Finally, please find at least two cores that agree to review your changes 
during the feature freeze window, and include their names. This is important, we 
don't need FFEs that won't get reviewed.


Happy hacking,
Dmitry



Re: [openstack-dev] [ironic] FFE - Requesting FFE for Routed Networks support.

2018-01-22 Thread Dmitry Tantsur
This FFE was approved at today's meeting. Please note that the deadline for 
merging it is Fri, Feb 2nd.





Re: [openstack-dev] [ironic] Remove in-tree policy and config?

2018-01-22 Thread Dmitry Tantsur

+1

I would really hate not to have a place to point people to, but the 
documentation now provides such a place. Keeping the examples up-to-date is 
quite annoying, so I'm all for dropping them.


On 01/22/2018 01:45 PM, John Garbutt wrote:

Hi,

While I was looking at the traits work, I noticed we still have policy and 
config in tree for ironic and ironic inspector:


http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/policy.json.sample
http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/ironic.conf.sample
http://git.openstack.org/cgit/openstack/ironic/tree/etc/ironic/policy.json

And in a similar way:
http://git.openstack.org/cgit/openstack/ironic-inspector/tree/policy.yaml.sample
http://git.openstack.org/cgit/openstack/ironic-inspector/tree/example.conf

There is an argument that says we shouldn't force operators to build a full 
environment to generate these, but this has been somewhat superseded by us 
having good docs:


https://docs.openstack.org/ironic/latest/configuration/sample-config.html
https://docs.openstack.org/ironic/latest/configuration/sample-policy.html
https://docs.openstack.org/ironic-inspector/latest/configuration/sample-config.html
https://docs.openstack.org/ironic-inspector/latest/configuration/sample-policy.html

It could look something like this (but with the tests working...):
https://review.openstack.org/#/c/536349

What do you all think?

Thanks,
johnthetubaguy








Re: [openstack-dev] [release] Lib branching for stable/queens

2018-01-22 Thread Dmitry Tantsur

Hi!

Unfortunately, it seems that we've discovered a critical issue in one of our 
libs (sushy) right after branching :( What's the procedure for emergency fixes 
to stable/queens right now?


On 01/19/2018 10:15 PM, Sean McGinnis wrote:

Happy Friday all.

Now that we are past the non-client lib freeze, we will need to have
stable/queens branches for those libs. For all libraries that did not miss the
freeze, I will be proposing the patches to get those stable branches created.

This should have been enforced as part of the last deliverable request, but I
don't think we had quite everything in place for branch creation. Going forward
as we do the client library releases and then the service releases, please make
sure your patch requesting the release includes the stable/queens branch 
creation.
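
For reference, a release request in openstack/releases with the stable/queens branch included looks roughly like this (the deliverable name, version and hash below are purely illustrative):

```yaml
# deliverables/queens/some-lib.yaml (illustrative values)
releases:
  - version: 1.3.0
    projects:
      - repo: openstack/some-lib
        hash: 0000000000000000000000000000000000000000
branches:
  - name: stable/queens
    location: 1.3.0
```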

If there is any reason for me to hold off on this for a library your team
manages, please let me know ASAP.


Upcoming service project branching
==

I mentioned this in the countdown email, but to increase the odds that someone
actually sees it - if your project follows the cycle-with-milestones release
model, please check membership of your $project-release group. The members of
the group can be found by filtering for the group here:

https://review.openstack.org/#/admin/groups/

This group should be limited to those aware of the restrictions as we wrap up
the end of the cycle to make sure only release critical things are allowed to
be merged into stable/queens as we finalize things for the final release.

As always, just let me know if there are any questions.

--
Sean McGinnis (smcginnis)








[openstack-dev] [ironic] Rocky PTG planning

2018-01-17 Thread Dmitry Tantsur

Hi all!

The PTG is slowly approaching. Make sure to do your visa paperwork (for those 
unfortunate among us who need it) and let's start planning! Drop your ideas on the 
etherpad: https://etherpad.openstack.org/p/ironic-rocky-ptg


Please do check the rules there before proposing, and please add your attendance 
information at the bottom.


Finally, let me know if you can help with organizing a social event in Dublin.

Thanks!



Re: [openstack-dev] [ironic] FFE - Requesting FFE for Routed Networks support.

2018-01-17 Thread Dmitry Tantsur

Hi!

I'm essentially +1 on granting this FFE, as it's a low-risk work for a great 
feature. See one comment inline.


On 01/17/2018 10:54 AM, Harald Jensås wrote:

Requesting FFE for Routed Network support in networking-baremetal.
---


# Pros
--
With the patches up for review[7] we have a working ml2 agent (depending on a
neutron fix) and mechanism driver combination that enables binding ports on
neutron routed networks.

Specifically, we report the bridge_mappings data to neutron, which
enables the _find_candidate_subnets() method in neutron ipam[1] to
succeed in finding a candidate subnet available to the ironic node when
ports on routed segments are bound.

This will allow users to take advantage of the support added to the
DHCP agent[2], which enables it to service other subnets on the network
via DHCP relay. For ironic this means we can support deploying nodes on
a remote L3 network, e.g. a different datacenter or a different
rack/rack-row.
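
As a hedged sketch of the deployment side (consult the neutron and networking-baremetal documentation for the authoritative option names), routed networks need the segments service plugin enabled and the baremetal mechanism driver loaded:

```ini
# neutron.conf: enable the routed networks (segments) service plugin
[DEFAULT]
service_plugins = router,segments

# ml2_conf.ini: load the baremetal mechanism driver alongside the others
[ml2]
mechanism_drivers = openvswitch,baremetal
```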



# Cons
--
Integration with placement does not currently work.

Neutron uses Nova host aggregates in combination with placement.
Specifically, hosts are added to a host aggregate for segments based on
SEGMENT_HOST_MAPPING. Ironic nodes cannot currently be added to host
aggregates in Nova. Because of this, the following will appear in the
neutron logs when the ironic-neutron agent is started:
RESP BODY: {"itemNotFound": {"message": "Compute host  could not be found.", "code": 404}}

Also, the placement API cannot be used to find good candidate ironic
nodes with a baremetal port on the correct segment. This will have to be worked 
around by the operator via capabilities and flavor properties, or via manual 
additions to resource providers in placement.

Depending on the direction of other projects (neutron and nova), the way
placement will finally work is not certain.

Either the nova work ([3] and [4]), or a neutron change to use placement
only, or a fallback to placement in neutron would be possible. In either
case there should be no need to change the networking-baremetal agent
or mechanism driver.


# Risks
---
Unless this bug[5] is fixed, we might break the current baremetal
mechanism driver functionality. I have proposed a patch[6] to neutron
that fixes the issue. In case no fix lands for this neutron bug soon, we
should probably push these changes to Rocky.


Let's add Depends-On to the first patch in the chain to make sure your patches 
don't merge until the fix is merged.
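
For anyone unfamiliar with the mechanism: a Depends-On footer in the commit message (the Change-Ids below are made up) makes the CI hold the patch until the referenced change merges:

```
Add routed networks support to the ml2 agent

Change-Id: I0123456789abcdef0123456789abcdef01234567
Depends-On: I89abcdef0123456789abcdef0123456789abcdef
```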





# Core reviewers

Julia Kreger, Sam Betts




[1] https://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/ipam_backend_mixin.py#n697
[2] https://review.openstack.org/#/c/468744/
[3] https://review.openstack.org/#/c/421009/
[4] https://review.openstack.org/#/c/421011/
[5] https://bugs.launchpad.net/neutron/+bug/1743579
[6] https://review.openstack.org/#/c/534449/
[7] https://review.openstack.org/#/q/project:openstack/networking-baremetal







Re: [openstack-dev] [ironic-inspector] Resigning my core-reviewer duties

2018-01-04 Thread Dmitry Tantsur

On 01/03/2018 04:24 PM, milanisko k wrote:

Folks,

as announced already on the Ironic upstream meeting, I'm hereby resigning my 
core-reviewer duties. I've changed my downstream occupation recently and I won't 
be able to keep up anymore.


As I said many times, I'm really sad to hear it, but I'm glad that you've found 
new cool challenges :)


I have removed your rights. I've also made a similar change for Yuiko, who is 
apparently no longer active in the community. Thanks to you both for the incredible 
contributions that allowed ironic-inspector to become what it is now!




Thank you all, I really enjoyed collaborating with the wonderful Ironic 
community!

Best regards,
milan








Re: [openstack-dev] [tripleo] Removing old baremetal commands from python-tripleoclient

2017-12-15 Thread Dmitry Tantsur

On 12/15/2017 01:04 PM, Dmitry Tantsur wrote:

On 12/15/2017 04:49 AM, Tony Breeds wrote:

Hi All,
 In review I01837a9daf6f119292b5a2ffc361506925423f11 I updated
ValidateInstackEnv to handle the case when the instackenv.json file
needs to represent a node that doesn't require a pm_user for IPMI to
work.

It turns out that I found that code path with grep rather than as the
result of a deploy step failing.  That's because it's only used for a
command that isn't used anymore, and the validation logic has been moved
to a mistral action.

That led me to look at which of the commands in that file aren't needed
anymore.  If my analysis is correct we have the following commands:

openstack baremetal instackenv validate:
 tripleoclient.v1.baremetal:ValidateInstackEnv
 NOT Deprecated


See below, it can be fixed. But I'd really prefer us to roll it into something 
like "openstack overcloud node import --validate-only".



openstack baremetal import:
 tripleoclient.v1.baremetal:ImportBaremetal
 DEPRECATED in b272a5c6 2017-01-03
 New command: openstack overcloud node import
openstack baremetal introspection bulk start:
 tripleoclient.v1.baremetal:StartBaremetalIntrospectionBulk
 DEPRECATED in b272a5c6 2017-01-03
 New command: openstack overcloud node introspect
openstack baremetal introspection bulk status:
 tripleoclient.v1.baremetal:StatusBaremetalIntrospectionBulk
 NOT Deprecated


This should really be deprecated along with "bulk start".


openstack baremetal configure ready state:
 tripleoclient.v1.baremetal:ConfigureReadyState
 NOT Deprecated


I wonder if this even works. It was introduced long ago, and has never had much 
testing (if any).



openstack baremetal configure boot:
 tripleoclient.v1.baremetal:ConfigureBaremetalBoot
 DEPRECATED in b272a5c6 2017-01-03
 New command: openstack overcloud node configure


YES PLEASE to all of this. The "baremetal" part makes users confuse these 
commands with ironicclient commands.




So my questions are basically:
1) Can we remove the deprecated code?
2) Does leaving the not-deprecated commands make sense?
3) Should we deprecate the remaining commands?
4) Do I need to update ValidateInstackEnv or is it okay for it to be
    busted for my use case?


I'm sorry for never getting to this, but the fix should be quite simple. You 
need to drop all its code from tripleoclient and make it use this workflow 
instead: 
https://github.com/openstack/tripleo-common/blob/master/workbooks/baremetal.yaml#L103. 
It is much newer, and is actually used in enrollment as well. If it is also 
broken for you - please fix it. But the code in tripleoclient is long rotten :)




Yours Tony.








