[openstack-dev] [infra] shutting down pholio.openstack.org

2017-06-12 Thread Ian Wienand
Hello,

We will be shutting down pholio.openstack.org in the next few days.

As discussed at the last #infra meeting [1], in short, "the times they
are a changin'" and the Pholio services have not been required.

Of course the original deployment puppet, etc., remains (see [2]), so
please reach out to the infra team if the service is needed again in the future.

Thanks,

-i

[1] 
http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-06-06-19.03.html
[2] https://specs.openstack.org/openstack-infra/infra-specs/specs/pholio.html



Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Chris Friesen

On 06/12/2017 01:50 PM, Flavio Percoco wrote:


Glance can be very exciting if one focuses on the interesting bits and it's an
*AWESOME* place where newcomers can start contributing, new developers can
learn and practice, etc. That said, I believe that code doesn't have to be
challenging to be exciting. There's also excitement in the simple but interesting
things.


As an outsider, I found it harder to understand the glance code than the nova 
code...and that's saying something. :)


From the naive external viewpoint, it just doesn't seem like what glance is 
doing should be all that complicated, and yet somehow I found it to be so.


Chris



Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Mikhail Fedosin
On Tue, Jun 13, 2017 at 4:43 AM, Flavio Percoco  wrote:

>
>
> On Mon, Jun 12, 2017, 19:47 Mikhail Fedosin  wrote:
>
>> On Tue, Jun 13, 2017 at 12:01 AM, Flavio Percoco 
>> wrote:
>>
>>> On 12/06/17 23:20 +0300, Mikhail Fedosin wrote:
>>>
 My opinion is that Glance stagnates and it's really hard to implement
 new
 features there. In two years, only one major improvement was developed
 (Image Import Refactoring), and no one has tested it in production yet.
 And
 this is in the heyday of the community, as you said!

>>>
>>> You're skipping 2 important things here:
>>>
>>> The first one is that focusing on the image import refactor (IIR) was a
>>> community choice. It's fixing a bigger problem that requires more focus.
>>> The
>>> design of the feature took a couple of cycles too, not the
>>> implementation. The
>>> second thing is that the slow pace may also be caused by the lack of
>>> contributors.
>>
>>
>> It's exactly what I'm talking about - implementing a medium-size feature
>> (IIR is about 600 lines of code [1][2]) took 1 year of discussions and 1
>> year of implementation by 5 full-time developers. And most importantly, it
>> took all the community attention. What if we need to implement more serious
>> features? How much time will it take, given that there are not so many
>> developers left?
>>
>
>
> What I was referring to is that this is not the normal case. The IIR was a
> special case, which doesn't mean implementing features is easy, as you
> mentioned.
>
> On the other hand OpenStack users have been requesting new features for
 a long time: I'm talking about multistore support, versioning of images,
 image slicing (like in docker), validation and conversion of uploading
 data
 and so on. And I can say that it is impossible to implement them without
 breaking Glance. But all this stuff is already done in Glare (multistore
 support is implemented partially, because modifications of glance_store
 are
 required). And if we switch OpenStack to Glare users will get these
 features out of the box.

>>>
>>> Some of these features could be implemented in Glance. As you mentioned,
>>> the
>>> code base is over-engineered but it could be simplified.
>>
>>
>> Everything is possible, I know that. But at what cost?
>>
>
>
> Exactly! This is what I'm asking you to help me out with. I'm trying to
> have a constructive discussion on the cost of this and find a short term
> solution and then a long term one.
>

> I don't think the current problem is caused by Glance's lack of "exciting"
>>> features and I certainly don't think replacing it with Glare would be of
>>> any
>>> help now. It may be something we want to think about in the future (and
>>> this is
>>> not the first time I say this) but what you're proposing will be an
>>> expensive
>>> distraction from the real problem.
>>
>>
>> And for the very last time - I'm not suggesting we replace Glance now or
>> even in a year. At the moment, an email with the title "Glance needs help,
>> it's getting critical" is enough.
>> I'm calling on us to think about the more distant future, probably two
>> years or so from now. What will prevent Flavio from writing such emails in the T cycle?
>> Bringing people from Nova and Cinder part-time will not work because, as
>> we discussed above, even a medium-size feature requires years of dedicated
>> work, and having their +1 on typo fixes... what's the benefit of that?
>>
>
> Fully agree here. What I think we need is a short term and a long term
> solution. Would you agree with this?
>
> I mentioned in my previous email that I've never been opposed to a future
> transition away from Glance as soon as this happens naturally.
>
> I understand that you're not proposing to replace Glance now. What I was
> trying to understand is why you thought migrating away from Glance in the
> future would help us now.
>

It won't help immediately, for sure. But in the long term I see the following benefits:
* We will have two full-time contributors from Nokia (and can have more if
necessary).
* The architecture is simpler, and all functions are small and well documented. I
believe it will take one or two days for a new developer to get accustomed
to it.
* For me it's much easier to write code and review patches in Glare, and I
will spend more time on it.
* Integration with more projects: if Heat, Mistral, and Murano store their
data in Glare, we will get more feedback.
* Long-awaited features! For example, Glare has database store support, so
users can put their small files (like heat templates) directly in MySQL
without needing to deploy Swift.


>
> And for the very last time - I'm here not to promote Glare. As you know, I
>> will soon be involved in this project only very indirectly. I'm here to
>> decide what to do with Glance next. In the original email Flavio said "So,
>> before things get even worse, I'd like us to brainstorm a bit on what
>> 

[openstack-dev] [Blazar] Skip weekly meeting

2017-06-12 Thread Masahito MUROI

Hi Blazar folks,

Based on the discussion in the last meeting, the team will not have the
weekly meeting this week because most of the members are out of town.


The next meeting is planned for 20 June.

best regards,
Masahito




Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Brian Rosmaita
I take a week off and look at what happens ...

Sorry for top-posting, but I just have some general comments.  Mike
raises some good points, but I think it's too late in the cycle to
swap Glance out for Glare and expect everything to work properly.  (I
don't mean to imply anything about the quality of the Glare code base
by this; the concern is whether we can get sufficient testing and code
changes completed so that we could be sure that the substitution of
Glare + Images API would be unnoticed by deployers.  I just don't see
that as realistic given our current personnel situation).  I think for
Pike we need to work within the Glance code base and focus on the
limited set of priorities that we've more or less agreed on [0], and
seriously discuss Mike's proposal at the PTG.

I'm glad Mike brought this up now, because it would be a big change,
and as you can see in the previous messages in this thread, there are
pluses and minuses that need to be carefully considered. So I think
that discussing this issue could be constructive, if our goal is to
have a successful resolution at the next PTG.  However, I personally
don't think it's a good development strategy for the OpenStack Pike
release, which is what we need to concentrate on in the short term.

cheers,
brian

[0] https://review.openstack.org/#/c/468035/



On Mon, Jun 12, 2017 at 9:43 PM, Flavio Percoco  wrote:
>
>
> On Mon, Jun 12, 2017, 19:47 Mikhail Fedosin  wrote:
>>
>> On Tue, Jun 13, 2017 at 12:01 AM, Flavio Percoco 
>> wrote:
>>>
>>> On 12/06/17 23:20 +0300, Mikhail Fedosin wrote:

 My opinion is that Glance stagnates and it's really hard to implement
 new
 features there. In two years, only one major improvement was developed
 (Image Import Refactoring), and no one has tested it in production yet.
 And
 this is in the heyday of the community, as you said!
>>>
>>>
>>> You're skipping 2 important things here:
>>>
>>> The first one is that focusing on the image import refactor (IIR) was a
>>> community choice. It's fixing a bigger problem that requires more focus.
>>> The
>>> design of the feature took a couple of cycles too, not the
>>> implementation. The
>>> second thing is that the slow pace may also be caused by the lack of
>>> contributors.
>>
>>
>> It's exactly what I'm talking about - implementing a medium-size feature
>> (IIR is about 600 lines of code [1][2]) took 1 year of discussions and 1
>> year of implementation by 5 full-time developers. And most importantly, it
>> took all the community attention. What if we need to implement more serious
>> features? How much time will it take, given that there are not so many
>> developers left?
>
>
>
> What I was referring to is that this is not the normal case. The IIR was a
> special case, which doesn't mean implementing features is easy, as you
> mentioned.
>
 On the other hand OpenStack users have been requesting new features
 for
 a long time: I'm talking about multistore support, versioning of images,
 image slicing (like in docker), validation and conversion of uploading
 data
 and so on. And I can say that it is impossible to implement them without
 breaking Glance. But all this stuff is already done in Glare (multistore
 support is implemented partially, because modifications of glance_store
 are
 required). And if we switch OpenStack to Glare users will get these
 features out of the box.
>>>
>>>
>>> Some of these features could be implemented in Glance. As you mentioned,
>>> the
>>> code base is over-engineered but it could be simplified.
>>
>>
>> Everything is possible, I know that. But at what cost?
>
>
>
> Exactly! This is what I'm asking you to help me out with. I'm trying to have
> a constructive discussion on the cost of this and find a short term
> solution and then a long term one.
>
>>> I don't think the current problem is caused by Glance's lack of
>>> "exciting"
>>> features and I certainly don't think replacing it with Glare would be of
>>> any
>>> help now. It may be something we want to think about in the future (and
>>> this is
>>> not the first time I say this) but what you're proposing will be an
>>> expensive
>>> distraction from the real problem.
>>
>>
>> And for the very last time - I'm not suggesting we replace Glance now or even
>> in a year. At the moment, an email with the title "Glance needs help, it's
>> getting critical" is enough.
>> I'm calling on us to think about the more distant future, probably two years or so from now.
>> What will prevent Flavio from writing such emails in the T cycle? Bringing
>> people from Nova and Cinder part-time will not work because, as we
>> discussed above, even a medium-size feature requires years of dedicated work,
>> and having their +1 on typo fixes... what's the benefit of that?
>
>
> Fully agree here. What I think we need is a short term and a long term
> solution. Would you agree with this?
>
> I mentioned in my 

Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Flavio Percoco
On Mon, Jun 12, 2017, 19:47 Mikhail Fedosin  wrote:

> On Tue, Jun 13, 2017 at 12:01 AM, Flavio Percoco 
> wrote:
>
>> On 12/06/17 23:20 +0300, Mikhail Fedosin wrote:
>>
>>> My opinion is that Glance stagnates and it's really hard to implement new
>>> features there. In two years, only one major improvement was developed
>>> (Image Import Refactoring), and no one has tested it in production yet.
>>> And
>>> this is in the heyday of the community, as you said!
>>>
>>
>> You're skipping 2 important things here:
>>
>> The first one is that focusing on the image import refactor (IIR) was a
>> community choice. It's fixing a bigger problem that requires more focus.
>> The
>> design of the feature took a couple of cycles too, not the
>> implementation. The
>> second thing is that the slow pace may also be caused by the lack of
>> contributors.
>
>
> It's exactly what I'm talking about - implementing a medium-size feature
> (IIR is about 600 lines of code [1][2]) took 1 year of discussions and 1
> year of implementation by 5 full-time developers. And most importantly, it
> took all the community attention. What if we need to implement more serious
> features? How much time will it take, given that there are not so many
> developers left?
>


What I was referring to is that this is not the normal case. The IIR was a
special case, which doesn't mean implementing features is easy, as you
mentioned.

On the other hand OpenStack users have been requesting new features for
>>> a long time: I'm talking about multistore support, versioning of images,
>>> image slicing (like in docker), validation and conversion of uploading
>>> data
>>> and so on. And I can say that it is impossible to implement them without
>>> breaking Glance. But all this stuff is already done in Glare (multistore
>>> support is implemented partially, because modifications of glance_store
>>> are
>>> required). And if we switch OpenStack to Glare users will get these
>>> features out of the box.
>>>
>>
>> Some of these features could be implemented in Glance. As you mentioned,
>> the
>> code base is over-engineered but it could be simplified.
>
>
> Everything is possible, I know that. But at what cost?
>


Exactly! This is what I'm asking you to help me out with. I'm trying to
have a constructive discussion on the cost of this and find a short term
solution and then a long term one.

I don't think the current problem is caused by Glance's lack of "exciting"
>> features and I certainly don't think replacing it with Glare would be of
>> any
>> help now. It may be something we want to think about in the future (and
>> this is
>> not the first time I say this) but what you're proposing will be an
>> expensive
>> distraction from the real problem.
>
>
> And for the very last time - I'm not suggesting we replace Glance now or even
> in a year. At the moment, an email with the title "Glance needs help, it's
> getting critical" is enough.
> I'm calling on us to think about the more distant future, probably two years or so from now.
> What will prevent Flavio from writing such emails in the T cycle? Bringing
> people from Nova and Cinder part-time will not work because, as we
> discussed above, even a medium-size feature requires years of dedicated work,
> and having their +1 on typo fixes... what's the benefit of that?
>

Fully agree here. What I think we need is a short term and a long term
solution. Would you agree with this?

I mentioned in my previous email that I've never been opposed to a future
transition away from Glance as soon as this happens naturally.

I understand that you're not proposing to replace Glance now. What I was
trying to understand is why you thought migrating away from Glance in the
future would help us now.

And for the very last time - I'm here not to promote Glare. As you know, I
> will soon be involved in this project only very indirectly. I'm here to
> decide what to do with Glance next. In the original email Flavio said "So,
> before things get even worse, I'd like us to brainstorm a bit on what
> solutions/options we have now". I described in detail my personal feelings
> about the current situation in Glance for the members of TC, who are
> unfamiliar with the project.  And also I suggested one possible solution
> with Glare, maybe not the best one, but I haven't heard any other proposals.
>

I know you're not promoting Glare and I hope my emails are not coming
through as accusations of any kind. I'm playing the devil's advocate
because I would like us to explore the different options we have and you
proposed one.

 Instead of constructive discussion and decision making, I received a bunch
> of insults in private correspondence, accusations of betrayal and
> suggestions to drive me out of the community.
>

As you know, I was cc'd in the thread where this happened and I'm deeply
sorry it happened. As I mentioned in my reply to that thread, I know your
intentions are good and I do not want you to go 

Re: [openstack-dev] [oslo.db] Stepping down from core

2017-06-12 Thread Mike Bayer

hey Roman -

It was a huge pleasure working w/ you on oslo.db! I hope we can 
collaborate again soon.


- mike



On 06/11/2017 10:32 AM, Roman Podoliaka wrote:

Hi all,

I recently changed jobs and haven't been able to devote as much time to
oslo.db as is expected from a core reviewer. I'm no longer working
on OpenStack, so you won't see me around much.

Anyway, it's been an amazing experience to work with all of you! Best
of luck! And see ya at various PyCon's around the world! ;)

Thanks,
Roman



Re: [openstack-dev] [oslo.db] Stepping down from core

2017-06-12 Thread Davanum Srinivas
Best wishes, Roman! I hope our paths will cross again.

Thanks,
Dims

On Sun, Jun 11, 2017 at 10:32 AM, Roman Podoliaka
 wrote:
> Hi all,
>
> I recently changed jobs and haven't been able to devote as much time to
> oslo.db as is expected from a core reviewer. I'm no longer working
> on OpenStack, so you won't see me around much.
>
> Anyway, it's been an amazing experience to work with all of you! Best
> of luck! And see ya at various PyCon's around the world! ;)
>
> Thanks,
> Roman
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Mikhail Fedosin
On Tue, Jun 13, 2017 at 12:01 AM, Flavio Percoco  wrote:

> On 12/06/17 23:20 +0300, Mikhail Fedosin wrote:
>
>> My opinion is that Glance stagnates and it's really hard to implement new
>> features there. In two years, only one major improvement was developed
>> (Image Import Refactoring), and no one has tested it in production yet.
>> And
>> this is in the heyday of the community, as you said!
>>
>
> You're skipping 2 important things here:
>
> The first one is that focusing on the image import refactor (IIR) was a
> community choice. It's fixing a bigger problem that requires more focus.
> The
> design of the feature took a couple of cycles too, not the implementation.
> The
> second thing is that the slow pace may also be caused by the lack of
> contributors.


It's exactly what I'm talking about - implementing a medium-size feature (IIR
is about 600 lines of code [1][2]) took 1 year of discussions and 1 year
of implementation by 5 full-time developers. And most importantly, it took
all the community attention. What if we need to implement more serious
features? How much time will it take, given that there are not so many
developers left?


>
>
>
>> On the other hand OpenStack users have been requesting new features
>> for
>> a long time: I'm talking about multistore support, versioning of images,
>> image slicing (like in docker), validation and conversion of uploading
>> data
>> and so on. And I can say that it is impossible to implement them without
>> breaking Glance. But all this stuff is already done in Glare (multistore
>> support is implemented partially, because modifications of glance_store
>> are
>> required). And if we switch OpenStack to Glare users will get these
>> features out of the box.
>>
>
> Some of these features could be implemented in Glance. As you mentioned,
> the
> code base is over-engineered but it could be simplified.


Everything is possible, I know that. But at what cost?


>
>
> Then, Glance works with images only, but Glare supports various types of
>> data, like heat and tosca templates. Next week we will add Secrets
>> artifact
>> type to store private data, and Mistral workflows. I mean - we'll have
>> unified catalog of all cloud data with the possibility to combine them in
>> metastructures, when artifact of one type depends on the other.
>>
>
> Glance working only with images is a design choice and I don't think that's
> something bad. I also don't think Glare's support for other artifacts is
> bad.
> Just different choices.


The idea behind Glare is to give operators, rather than developers, the
opportunity to decide what types they want to use. Specify
"enabled_artifact_types=images" in glare.conf and you'll get a service that
works with images only (consider it a feature if you want ;) ). Glance is
just a special case of Glare, and it's not a big deal for Glare to behave
like Glance in terms of "working only with images".


>
>
>
>> I will repeat it once again, in order to be understood as much as
>> possible.
>> It takes too much time to develop new features and fix old bugs (years to
>> be exact). If we continue in the same spirit, it certainly will not
>> increase the joy of OpenStack users and they will look for other solutions
>> that meet their desires.
>>
>
> Mike, I understand that you think that the broader set of features that
> Glare
> provides would be better for users, which is something I disagree with a
> bit.
> More features don't make a service better. What I'm failing to see,
> though, is
> why you believe that replacing Glance with Glare will solve the current
> problem.


I think that features are important, but sometimes stability matters too!
There are still a lot of dangerous and nasty bugs that we can't fix
without breaking Glance.


>
> I don't think the current problem is caused by Glance's lack of "exciting"
> features and I certainly don't think replacing it with Glare would be of
> any
> help now. It may be something we want to think about in the future (and
> this is
> not the first time I say this) but what you're proposing will be an
> expensive
> distraction from the real problem.


And for the very last time - I'm not suggesting we replace Glance now or even
in a year. At the moment, an email with the title "Glance needs help, it's
getting critical" is enough.
I'm calling on us to think about the more distant future, probably two years or so from now.
What will prevent Flavio from writing such emails in the T cycle? Bringing
people from Nova and Cinder part-time will not work because, as we
discussed above, even a medium-size feature requires years of dedicated work,
and having their +1 on typo fixes... what's the benefit of that?

And for the very last time - I'm here not to promote Glare. As you know, I
will soon be involved in this project only very indirectly. I'm here to
decide what to do with Glance next. In the original email Flavio said "So,
before things get even worse, I'd like us to brainstorm a bit on what

Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Flavio Percoco
On Mon, Jun 12, 2017, 19:25 Mike Perez  wrote:

> On 16:01 Jun 12, Flavio Percoco wrote:
> > On 12/06/17 23:20 +0300, Mikhail Fedosin wrote:
> > > My opinion is that Glance stagnates and it's really hard to implement
> new
> > > features there. In two years, only one major improvement was developed
> > > (Image Import Refactoring), and no one has tested it in production
> yet. And
> > > this is in the heyday of the community, as you said!
> >
> > You're skipping 2 important things here:
> >
> > The first one is that focusing on the image import refactor (IIR) was a
> > community choice. It's fixing a bigger problem that requires more focus.
> The
> > design of the feature took a couple of cycles too, not the
> implementation. The
> > second thing is that the slow pace may also be caused by the lack of
> > contributors.
>
> +1 on the image import refactor work. That's great that it is done!
>
> Mikhail,
>
> I'm pretty thorough on reading this list for the dev digest, so even I
> missed
> that news. Which release was that done in? Are people not using it in
> production right away because of having to upgrade to a new release?
>


It's actually coming out with Pike. Patches landed last week.

Flavio


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Mike Perez
On 16:01 Jun 12, Flavio Percoco wrote:
> On 12/06/17 23:20 +0300, Mikhail Fedosin wrote:
> > My opinion is that Glance stagnates and it's really hard to implement new
> > features there. In two years, only one major improvement was developed
> > (Image Import Refactoring), and no one has tested it in production yet. And
> > this is in the heyday of the community, as you said!
> 
> You're skipping 2 important things here:
> 
> The first one is that focusing on the image import refactor (IIR) was a
> community choice. It's fixing a bigger problem that requires more focus. The
> design of the feature took a couple of cycles too, not the implementation. The
> second thing is that the slow pace may also be caused by the lack of
> contributors.

+1 on the image import refactor work. That's great that it is done!

Mikhail,

I'm pretty thorough on reading this list for the dev digest, so even I missed
that news. Which release was that done in? Are people not using it in
production right away because of having to upgrade to a new release?

-- 
Mike Perez




Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-12 Thread Mike Perez
On 08:45 Jun 08, Jim Rollenhagen wrote:
> Hey friends,
> 
> I've been mostly missing for the past six weeks while looking for a new
> job, so maybe you've forgotten me already, maybe not. I'm happy to tell you
> I've found one that I think is a great opportunity for me. But, I'm sad to
> tell you that it's totally outside of the OpenStack community.
> 
> The last 3.5 years have been amazing. I'm extremely grateful that I've been
> able to work in this community - I've learned so much and met so many
> awesome people. I'm going to miss the insane(ly awesome) level of
> collaboration, the summits, the PTGs, and even some of the bikeshedding.
> We've built amazing things together, and I'm sure y'all will continue to do
> so without me.
> 
> I'll still be lurking in #openstack-dev and #openstack-ironic for a while,
> if people need me to drop a -2 or dictate old knowledge or whatever, feel
> free to ping me. Or if you just want to chat. :)

Really appreciated your time as PTL, and congrats on the future.

-- 
Mike Perez




Re: [openstack-dev] [oslo.db] Stepping down from core

2017-06-12 Thread Mike Perez
On 17:32 Jun 11, Roman Podoliaka wrote:
> Hi all,
> 
> I recently changed jobs and haven't been able to devote as much time to
> oslo.db as is expected from a core reviewer. I'm no longer working
> on OpenStack, so you won't see me around much.
> 
> Anyway, it's been an amazing experience to work with all of you! Best
> of luck! And see ya at various PyCon's around the world! ;)

Thanks for all your contributions, and congrats with the new job.

-- 
Mike Perez




[openstack-dev] [tripleo][ci] TripleO OVB check gates to move to third party

2017-06-12 Thread Ronelle Landy
Greetings,

TripleO OVB check gates are managed by upstream Zuul and executed on nodes
provided by test cloud RH1. RDO Cloud is now available as a test cloud to
be used when running CI jobs. To utilize RDO Cloud, we could either:

- continue to run from upstream Zuul (and spin up nodes to deploy the
overcloud from RDO Cloud)
- switch the TripleO OVB check gates to run as third party and manage
these jobs from the Zuul instance used by Software Factory

The openstack infra team advocates moving to third party.
The CI team is meeting with Frederic Lepied, Alan Pevec, and other members
of the Software Factory/RDO project infra team to discuss how this move
could be managed.

Note: multinode jobs are not impacted - and will continue to run from
upstream Zuul on nodes provided by nodepool.

Since a move to third party could have significant impact, we are posting
this out to gather feedback and/or concerns that TripleO developers may
have.


Thanks!


Re: [openstack-dev] Is Routes==2.3.1 a binary only package or something?

2017-06-12 Thread Michael Still
Certainly removing the "--no-binary :all:" flag results in a build that builds.
I'll test and see if it works todayish.

Michael

On Mon, Jun 12, 2017 at 9:56 PM, Chris Smart  wrote:

> On Mon, 12 Jun 2017, at 21:36, Michael Still wrote:
> > The experimental buildroot based ironic python agent bans all binaries, I
> > am not 100% sure why. Chris is the guy there.
> >
>
> Buildroot ironic python agent forces a build of all the
> ironic-python-agent dependencies (as per requirements and constraints)
> with no-binary :all:,  then builds ironic-python-agent wheel from the
> git clone, then it can just install them all from local compiled wheels
> into the target.[1]
>
> IIRC this was to make sure that the wheels matched the target. It could
> be all done wrong though.
>
> [1]
> https://github.com/csmart/ipa-buildroot/blob/master/
> buildroot-ipa/board/openstack/ipa/post-build.sh#L113
>
> -c
>



-- 
Rackspace Australia


Re: [openstack-dev] [telemetry][ceilometer][opendaylight][networking-odl] OpenDaylight Driver for Ceilometer

2017-06-12 Thread Isaku Yamahata
What's the policy of telemetry or ceilometer?

As long as it follows their policy,
networking-odl is fine with including such drivers.

Thanks,

On Mon, Jun 12, 2017 at 08:25:49AM +,
Deepthi V V  wrote:

> Hi,
> 
> We plan to propose a ceilometer driver for collecting network statistics 
> information from OpenDaylight. We were wondering if we could have the driver 
> code reside in the networking-odl project instead of the Ceilometer project. The 
> thought is to keep OpenDaylight-dependent code restricted to the n-odl repo. 
> Please let us know your thoughts on the same.
> 
> Thanks,
> Deepthi



-- 
Isaku Yamahata 



Re: [openstack-dev] [tripleo] Role updates

2017-06-12 Thread Alex Schultz
On Mon, Jun 12, 2017 at 2:55 AM, Dmitry Tantsur  wrote:
> On 06/09/2017 05:24 PM, Alex Schultz wrote:
>>
>> Hey folks,
>>
>> I wanted to bring to your attention that we've merged the change[0] to
>> add a basic set of roles that can be combined to create your own
>> roles_data.yaml as needed.  With this change the roles_data.yaml and
>> roles_data_undercloud.yaml files in THT should not be changed by hand.
>> Instead if you have an update to a role, please update the appropriate
>> roles/*.yaml file. I have proposed a change[1] to THT with additional
>> tools to validate that the roles/*.yaml files are updated and that
>> there are no unaccounted for roles_data.yaml changes.  Additionally
>> this change adds a new tox target to assist in the generation of
>> these basic roles data files that we provide.
>>
>> Ideally I would like to get rid of the roles_data.yaml and
>> roles_data_undercloud.yaml so that the end user doesn't have to
>> generate this file at all but that won't happen this cycle.  In the
>> mean time, additional documentation around how to work with roles has
>> been added to the roles README[2].
>
>
> Hi, this is awesome! Do we expect more example roles to be added? E.g. I
> could add a role for a reference Ironic Conductor node.
>

Yes. My expectation is that as we come up with new roles for supported
deployment types, we add them to the THT/roles directory so end
users can also use them.  The base set came from some work we did
during the Ocata cycle to have 3 base sets of architectures.

3 controller, 3 compute, 1 ceph (ha)
1 controller, 1 compute, 1 ceph (nonha)
3 controller, 3 database, 3 messaging, 2 networker, 1 compute, 1 ceph (advanced)

Feel free to propose additional roles if you have architectures you'd
like to make reusable.
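
For anyone curious what "combining" the role files amounts to before the
validation/tox tooling lands, here is a rough sketch in Python (purely
illustrative, not the actual THT tooling; it assumes each roles/*.yaml
parses to a YAML list holding that single role's definition):

    #!/usr/bin/env python
    # Illustrative only -- use the tooling in tripleo-heat-templates for real
    # work. Assumption: each roles/<Name>.yaml is a YAML list with one role.
    import sys

    import yaml  # PyYAML


    def combine_roles(role_files, output='roles_data.yaml'):
        combined = []
        for path in role_files:
            with open(path) as f:
                # extend() keeps the result a flat list of role definitions
                combined.extend(yaml.safe_load(f))
        with open(output, 'w') as f:
            yaml.safe_dump(combined, f, default_flow_style=False)


    if __name__ == '__main__':
        # e.g. ./combine_roles.py roles/Controller.yaml roles/Compute.yaml
        combine_roles(sys.argv[1:])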

Thanks,
-Alex


>>
>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/#/c/445687/
>> [1] https://review.openstack.org/#/c/472731/
>> [2]
>> https://github.com/openstack/tripleo-heat-templates/blob/master/roles/README.rst
>>
>>
>
>



Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-12 Thread Jeremy Stanley
On 2017-06-12 14:07:51 -0700 (-0700), Ken'ichi Ohmichi wrote:
[...]
> The difference between the current stackalytics config and the above API
> is that stackalytics contains gerrit-id and launchpad-id in the config
> but the API doesn't. I guess we can use e-mail addresses instead of
> gerrit-id and launchpad-id and drop them from the stackalytics config.
[...]

Right, I expect a sane design to be using E-mail addresses to
identify the mapping between the foundation's data, Gerrit and
Launchpad (this would also make it possible to correlate mailing
list posts if we wanted). Something along the lines of querying
Gerrit for the list of known E-mail addresses and then querying the
foundation directory for any profiles associated with each of those
addresses, building up a set of unique foundation IDs as it goes.
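
In rough, untested Python that could look something like the sketch below
(the member directory endpoint and the filter/relations parameters are the
ones I described before; the response shape and the source of the Gerrit
address list are assumptions here):

    import requests

    MEMBERS_API = ('https://openstackid-resources.openstack.org'
                   '/api/public/v1/members')

    def member_ids_for(email):
        # filter/relations parameters as described earlier in the thread;
        # the {'data': [{'id': ...}, ...]} response shape is an assumption.
        resp = requests.get(MEMBERS_API, params={
            'filter': 'email==%s' % email,
            'relations': 'all_affiliations',
        })
        resp.raise_for_status()
        return [m.get('id') for m in resp.json().get('data', [])]

    def correlate(gerrit_emails):
        # gerrit_emails: however you pull the known addresses out of Gerrit.
        foundation_ids = set()
        for email in gerrit_emails:
            foundation_ids.update(member_ids_for(email))
        return foundation_ids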

Once authentication for Gerrit moves to OpenStackID we can more
directly correlate Gerrit users and foundation profiles based on a
common OpenID.
-- 
Jeremy Stanley




Re: [openstack-dev] [tripleo][ci] tripleo periodic jobs moving to RDO's software factory and RDO Cloud

2017-06-12 Thread Paul Belanger
On Mon, Jun 12, 2017 at 05:01:26PM -0400, Wesley Hayutin wrote:
> Greetings,
> 
> I wanted to send out a summary email regarding some work that is still
> developing and being planned to give interested parties time to comment and
> prepare for change.
> 
> Project:
> Move tripleo periodic promotion jobs
> 
> Goal:
> Increase the cadence of tripleo-ci periodic promotion jobs in a way
> that does not impact upstream OpenStack zuul queues and infrastructure.
> 
> Next Steps:
> The dependencies in RDO's instance of software factory are now complete
> and we should be able to create a net new zuul queue in RDO infra for
> tripleo-periodic jobs.  These jobs will have to run both multinode nodepool
> and ovb style jobs and utilize RDO-Cloud as the host cloud provider.  The
> TripleO CI team is looking into moving the TripleO periodic jobs running
> upstream to run from RDO's software factory instance. This move will allow
> the CI team more flexibility in managing the periodic jobs and resources to
> run the jobs more frequently.
> 
> TLDR:
> There is no set date as to when the periodic jobs will move. The move
> will depend on tenant resource allocation and how easily the periodic jobs
> can be modified.  This email is to inform the group that changes are being
> planned to the tripleo periodic workflow and allow time for comment and
> preparation.
> 
> Completed Background Work:
> After a long discussion with Paul Belanger about increasing the cadence
> of the promotion jobs [1], Paul explained infra's position: if he doesn't
> -1/-2 a new pipeline that has the same priority as check jobs, someone else
> will. To summarize the point, the new pipeline would compete and slow down
> non-tripleo projects in the gate even when the hardware resources are our
> own.
> To avoid slowing down non-tripleo projects Paul has volunteered to help
> set up the infrastructure in rdoproject to manage the queue (zuul etc.). We
> would still use rh-openstack-1 / rdocloud for ovb, and could also trigger
> multinode nodepool jobs.
> There is one hitch though, currently, rdo-project does not have all the
> pieces of the puzzle in place to move off of openstack zuul and onto
> rdoproject zuul. Paul mentioned that nodepool-builder [2] is a hard
> requirement to be setup in rdoproject before we can proceed here. He
> mentioned working with the software factory guys to get this setup and
> running.
> At this time, I think this issue is blocked until further discussion.
> [1] https://review.openstack.org/#/c/443964/
> [2]
> https://github.com/openstack-infra/nodepool/blob/master/nodepool/builder.py
> 
> Thanks

The first step is landing the nodepool elements in nodepool.rdoproject.org, and
building a centos-7 DIB.  I believe number80 is currently working on this and
hopefully that could be landed in the next day or so.  Once images have been
built, it won't be much work to then run a job. RDO already has third-party jobs
running; we'd do the same with tripleo-ci.




Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Flavio Percoco

On 12/06/17 23:20 +0300, Mikhail Fedosin wrote:

My opinion is that Glance stagnates and it's really hard to implement new
features there. In two years, only one major improvement was developed
(Image Import Refactoring), and no one has tested it in production yet. And
this is in the heyday of the community, as you said!


You're skipping 2 important things here:

The first one is that focusing on the image import refactor (IIR) was a
community choice. It's fixing a bigger problem that requires more focus. The
design of the feature took a couple of cycles too, not the implementation. The
second thing is that the slow pace may also be caused by the lack of
contributors.



On the other hand OpenStack users have been requesting new features for
a long time: I'm talking about multistore support, versioning of images,
image slicing (like in docker), validation and conversion of uploading data
and so on. And I can say that it is impossible to implement them without
breaking Glance. But all this stuff is already done in Glare (multistore
support is implemented partially, because modifications of glance_store are
required). And if we switch OpenStack to Glare users will get these
features out of the box.


Some of these features could be implemented in Glance. As you mentioned, the
code base is over-engineered but it could be simplified.


Then, Glance works with images only, but Glare supports various types of
data, like heat and tosca templates. Next week we will add Secrets artifact
type to store private data, and Mistral workflows. I mean - we'll have
unified catalog of all cloud data with the possibility to combine them in
metastructures, when artifact of one type depends on the other.


Glance working only with images is a design choice and I don't think that's
something bad. I also don't think Glare's support for other artifacts is bad.
Just different choices.



I will repeat it once again, in order to be understood as much as possible.
It takes too much time to develop new features and fix old bugs (years to
be exact). If we continue in the same spirit, it certainly will not
increase the joy of OpenStack users and they will look for other solutions
that meet their desires.


Mike, I understand that you think that the broader set of features that Glare
provides would be better for users, which is something I disagree with a bit.
More features don't make a service better. What I'm failing to see, though, is
why you believe that replacing Glance with Glare will solve the current problem.

I don't think the current problem is caused by Glance's lack of "exciting"
features and I certainly don't think replacing it with Glare would be of any
help now. It may be something we want to think about in the future (and this is
not the first time I say this) but what you're proposing will be an expensive
distraction from the real problem.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-12 Thread Ken'ichi Ohmichi
2017-06-08 10:51 GMT-07:00 Jeremy Stanley :
> On 2017-06-08 09:49:03 -0700 (-0700), Ken'ichi Ohmichi wrote:
>> 2017-06-08 7:19 GMT-07:00 Jeremy Stanley :
> [...]
>> > There is a foundation member directory API now which provides
>> > affiliation details and history, so if it were my project (it's
>> > not though) I'd switch to querying that and delete all the
>> > static affiliation mapping out of that config instead. Not only
>> > would it significantly reduce the reviewer load for
>> > Stackalytics, but it would also provide a greater incentive for
>> > contributors to keep their affiliation data updated in the
>> > foundation member directory.
>>
>> Interesting idea, thanks. It would be nice to centralize such
>> information into a single place. Can I know the detail of the API?
>> I'd like to take a look for some prototyping.
>
> It only _just_ rolled to production at
> https://openstackid-resources.openstack.org/api/public/v1/members
> yesterday so I don't know how stable it should be considered at this
> particular moment. The implementation is at
>  https://git.openstack.org/cgit/openstack-infra/openstackid-resources/tree/app/Models/Foundation/Main/Member.php
>  >
> but details haven't been added to the API documentation in that repo
> yet. (I also just now realized we haven't added a publishing job for
> those API docs either, so I'm working on that bit immediately.)
>
> The relevant GET parameters for this case are
> filter=email==someb...@example.com and relations=all_affiliations
> which gets you a list under the "affiliations" key with all
> start/end dates and organizations for the member associated with
> that address. This of course presumes contributors update their
> foundation profiles to include any E-mail addresses they use with
> Git, as well as recording appropriate affiliation timeframes. Those
> fields in the member directory profiles have existed for quite a few
> years now, so hopefully at least some of us have already done that.

Thanks for the info, Jeremy.

The difference between the current stackalytics config and the above API
is that stackalytics contains gerrit-id and launchpad-id in the config but
the API doesn't.
I guess we can use e-mail addresses instead of gerrit-id and
launchpad-id and drop them from the stackalytics config.
I will dig more deeply anyway.

Thanks



[openstack-dev] [tripleo][ci] tripleo periodic jobs moving to RDO's software factory and RDO Cloud

2017-06-12 Thread Wesley Hayutin
Greetings,

I wanted to send out a summary email regarding some work that is still
developing and being planned to give interested parties time to comment and
prepare for change.

Project:
Move tripleo periodic promotion jobs

Goal:
Increase the cadence of tripleo-ci periodic promotion jobs in a way
that does not impact upstream OpenStack zuul queues and infrastructure.

Next Steps:
The dependencies in RDO's instance of software factory are now complete
and we should be able to create a net new zuul queue in RDO infra for
tripleo-periodic jobs.  These jobs will have to run both multinode nodepool
and ovb style jobs and utilize RDO-Cloud as the host cloud provider.  The
TripleO CI team is looking into moving the TripleO periodic jobs running
upstream to run from RDO's software factory instance. This move will allow
the CI team more flexibility in managing the periodic jobs and resources to
run the jobs more frequently.

TLDR:
There is no set date as to when the periodic jobs will move. The move
will depend on tenant resource allocation and how easily the periodic jobs
can be modified.  This email is to inform the group that changes are being
planned to the tripleo periodic workflow and allow time for comment and
preparation.

Completed Background Work:
After a long discussion with Paul Belanger about increasing the cadence
of the promotion jobs [1], Paul explained infra's position: if he doesn't
-1/-2 a new pipeline that has the same priority as check jobs, someone else
will. To summarize the point, the new pipeline would compete and slow down
non-tripleo projects in the gate even when the hardware resources are our
own.
To avoid slowing down non-tripleo projects Paul has volunteered to help
set up the infrastructure in rdoproject to manage the queue (zuul etc.). We
would still use rh-openstack-1 / rdocloud for ovb, and could also trigger
multinode nodepool jobs.
There is one hitch though, currently, rdo-project does not have all the
pieces of the puzzle in place to move off of openstack zuul and onto
rdoproject zuul. Paul mentioned that nodepool-builder [2] is a hard
requirement to be set up in rdoproject before we can proceed here. He
mentioned working with the software factory guys to get this setup and
running.
At this time, I think this issue is blocked until further discussion.
[1] https://review.openstack.org/#/c/443964/
[2]
https://github.com/openstack-infra/nodepool/blob/master/nodepool/builder.py

Thanks


Re: [openstack-dev] [heat] Making stack outputs static

2017-06-12 Thread Zane Bitter

On 12/06/17 16:21, Steven Hardy wrote:

I think we wanted to move to convergence anyway so I don't see a problem
with this.  I know there was some discussion about starting to test with
convergence in tripleo-ci, does anyone know what, if anything, happened with
that?

There's an experimental job that runs only on the heat repo
(gate-tripleo-ci-centos-7-ovb-nonha-convergence)

But yeah now seems like a good time to get something running more
regularly in tripleo-ci.


+1, there's no reason not to run a non-voting job against tripleo itself 
at this point IMHO. That would allow me to start tracking the memory use 
over time.




Re: [openstack-dev] [heat] Making stack outputs static

2017-06-12 Thread Steven Hardy
On Mon, Jun 12, 2017 at 6:18 PM, Ben Nemec  wrote:
>
>
> On 06/09/2017 03:10 PM, Zane Bitter wrote:
>>
>> History lesson: a long, long time ago we made a very big mistake. We
>> treated stack outputs as things that would be resolved dynamically when
>> you requested them, instead of having values fixed at the time the
>> template was created or updated. This makes performance of reading
>> outputs slow, especially for e.g. large stacks, because it requires
>> making ReST calls, and it can result in inconsistencies between Heat's
>> internal model of the world and what it actually outputs.
>>
>> As unfortunate as this is, it's difficult to change the behaviour and be
>> certain that no existing users will get broken. For that reason, this
>> issue has never been addressed. Now is the time to address it.
>>
>> Here's the tracker bug: https://bugs.launchpad.net/heat/+bug/1660831
>>
>> It turns out that the correct fix is to store the attributes of a
>> resource in the DB - this accounts for the fact that outputs may contain
>> attributes of multiple resources, and that these resources might get
>> updated at different times. It also solves a related consistency issue,
>> which is that during a stack update a resource that is not updated may
>> nevertheless report new attribute values, and thus cause things
>> downstream to be updated, or to fail, unexpectedly (e.g.
>> https://bugzilla.redhat.com/show_bug.cgi?id=1430753#c13).
>>
>> The proposal[1] is to make this change in Pike for convergence stacks
>> only. This is to allow some warning for existing users who might be
>> relying on the current behaviour - at least if they control their own
>> cloud then they can opt to keep convergence disabled, and even once they
>> opt to enable it for new stacks they can keep using existing stacks in
>> legacy mode until they are ready to convert them to convergence or
>> replace them. In addition, it avoids the difficulty of trying to get
>> consistency out of the legacy path's crazy backup-stack shenanigans -
>> there's basically no way to get the outputs to behave in exactly the
>> same way in the legacy path as they will in convergence.
>>
>> This topic was raised at the Forum, and there was some feedback that:
>>
>> 1) There are users who require the old behaviour even after they move to
>> convergence.
>> 2) Specifically, there are users who don't have public API endpoints for
>> services other than Heat, and who rely on Heat proxying requests to
>> other services to get any information at all about their resources o.O
>> 3) There are users still using the legacy path (*cough*TripleO) that
>> want the performance benefits of quick output resolution.
>>
>> The suggestion is that instead of tying the change to the convergence
>> flag, we should make it configurable by the user on a per-stack basis.
>>
>> I am vehemently opposed to this suggestion.
>>
>> It's a total cop-out to make the user decide. The existing behaviour is
>> clearly buggy and inconsistent. Users are not, and should not have to
>> be, sufficiently steeped in the inner workings of Heat to be able to
>> decide whether and when to subject themselves to random inconsistencies
>> and hope for the best. If we make the change the default then we'll
>> still break people, and if we don't we'll still be saying "OMG, you
>> forgot to enable the --not-suck option??!" 10 years from now.
>>
>> Instead, this is what I'm proposing as the solution to the above feedback:
>>
>> 1) The 'show' attribute of each resource will be marked CACHE_NONE[2].
>> This ensures that the live data is always available via this attribute.
>> 2) When showing a resource's attributes via the API (as opposed to
>> referencing them from within a template), always return live values.[3]
>> Since we only store the attribute values that are actually referenced in
>> the template anyway, we more or less have to do this if we want the
>> attributes output through this API to be consistent with each other.
>> 3) Move to convergence. Seriously, the memory and database usage are
>> much improved, and there are even more memory improvements in the
>> pipeline,[4] and they might even get merged in Pike as long as we don't
>> have to stop and reimplement the attribute storage patches that they
>> depend on. If TripleO were to move to convergence in Queens, which I
>> believe is 100% feasible, then it would get the performance improvements
>> at least as soon as it would if we tried to implement attribute storage
>> in the legacy path.
>
>
> I think we wanted to move to convergence anyway so I don't see a problem
> with this.  I know there was some discussion about starting to test with
> convergence in tripleo-ci, does anyone know what, if anything, happened with
> that?

There's an experimental job that runs only on the heat repo
(gate-tripleo-ci-centos-7-ovb-nonha-convergence)

But yeah now seems like a good time to get something running more
regularly in tripleo-ci.

Steve


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Mikhail Fedosin
Well... My suggestion is to keep Glance maintained and begin experimental
adoption of Glare. So this is not an immediate replacement, but the
evolution of the Image service.
My opinion is that Glance stagnates and it's really hard to implement new
features there. In two years, only one major improvement was developed
(Image Import Refactoring), and no one has tested it in production yet. And
this is in the heyday of the community, as you said!

On the other hand OpenStack users have been requesting new features for
a long time: I'm talking about multistore support, versioning of images,
image slicing (like in docker), validation and conversion of uploading data
and so on. And I can say that it is impossible to implement them without
breaking Glance. But all this stuff is already done in Glare (multistore
support is implemented partially, because modifications of glance_store are
required). And if we switch OpenStack to Glare users will get these
features out of the box.

Then, Glance works with images only, but Glare supports various types of
data, like heat and tosca templates. Next week we will add Secrets artifact
type to store private data, and Mistral workflows. I mean - we'll have
unified catalog of all cloud data with the possibility to combine them in
metastructures, when artifact of one type depends on the other.

I will repeat it once again, in order to be as clear as possible.
It takes too much time to develop new features and fix old bugs (years, to
be exact). If we continue in the same spirit, it certainly will not
increase the joy of OpenStack users, and they will look for other solutions
that meet their needs.

Best,
Mike

On Mon, Jun 12, 2017 at 10:20 PM, Flavio Percoco  wrote:

> On 12/06/17 16:56 +0300, Mikhail Fedosin wrote:
>
>> Hello!
>>
>> Flavio raised a very difficult and important question, and I think that
>> we,
>> as community members, should decide what to do with Glance next.
>>
>
> Hi Mike,
>
>
> I will try to state my subjective opinion. I was involved in the Glance
>> project for almost three years and studied it fairly thoroughly. I believe that
>> the main problem is that the project was designed extremely poorly. Glance
>> does not have many tasks to solve, but nevertheless, there are a lot of
>> Java design patterns used (factory of factories, visitors, proxy and other
>> things that are unnecessary in this case). All this leads to absolutely
>> sad
>> consequences, when in order to add an image tag over 180 objects of
>> different classes are created, the code execution passes through more than
>> 25 locations with a number of callbacks 3 times. So I can say that the
>> code
>> base is artificially over-complicated and incredibly inflated.
>>
>> The next problem is that over the years the code has grown by a number of
>> workarounds, which make it difficult to implement new changes - any change
>> leads to something breaking down somewhere else. In the long run, we get a
>> lot of pain associated with race conditions, hard-to-recover heisenbugs
>> and
>> other horrors of programmer's life. It is difficult to talk about
>> attracting new developers, because developing code in such
>> conditions is mentally exhausting.
>>
>
> I don't disagree on this. The code base *is* over-engineered in many areas.
> However, I don't think this is a good reason to just throw the entire
> project
> away. With enough time and contributions, the code could be refactored.
>
> We can continue to deny the obvious, saying that Glance simply needs people
>> and everything will be wonderful. But unfortunately this is not so - we
>> should admit that it is simply not profitable to engage in further
>> development. I suggest thinking about moving the current code base into a
>> support mode and starting to develop an alternative (which I have been
>> doing for the past year and a half).
>>
>> If you are allergic to the word "artifacts", do not read the following
>> paragraph:
>>
>> We are actively developing the Glare project, which offers a universal
>> catalog of various binary data along with its metadata - at the moment the
>> catalog supports the storage of images of virtual machines and has feature
>> parity with Glance. The service is used in production by Nokia, and it was
>> thoroughly tested at various settings. Next week we plan to release the
>> first stable version and begin the integration with various projects of
>> OpenStack: Mistral and Vitrage in the first place.
>>
>> As a solution, I can propose to implement an additional API to Glare,
>> which
>> would correspond to OpenStack Image API v2 and test that OpenStack is able
>> to work on its basis. After that, leave Glance at rest and start
>> developing
>> Glare as a universal catalog of binary data for OpenStack.
>>
>
> Could you please elaborate more on why you think switching code bases is
> going
> to solve the current problem? In your email you talked about Glance's
> over-engineered code as being the thing driving people away and while I
> disagree with that statement, I'm wondering whether you really think that's
> the motivation or there's something else.

Re: [openstack-dev] [User-committee] Action Items WG Chairs: Requesting your input to a cross Working Group session

2017-06-12 Thread Edgar Magana
WG Chairs,

Please take a moment to add your date and time preferences in the doodle poll 
prepared below. We need all WG Chairs to attend this meeting. If you can’t 
attend any of the proposed times, please find a member of the WG who could 
attend and provide an update on the WG activities. If you are no longer 
available to help us as chair of any WG, let the UC committee know as soon 
as possible to help in the transition to new chair(s).

Thanks,

User Committee

(Adding in Bcc All WG Chairs emails)


From: "MCCABE, JAMEY A" 
Date: Wednesday, May 31, 2017 at 10:10 AM
To: "'user-commit...@lists.openstack.org'" 
, "openstack-operat...@lists.openstack.org" 
, "openstack-dev@lists.openstack.org." 

Subject: [User-committee] Action Items WG Chairs: Requesting your input to a 
cross Working Group session

Working group (WG) chairs or delegates, please enter your name (and WG name) 
and what times you could meet at this poll: 
https://beta.doodle.com/poll/6k36zgre9ttciwqz#table

As background and to share progress:
-  We started and generally confirmed the desire to have a regular 
cross WG status meeting at the Boston Summit.
-  Specifically the groups interested in Telco NFV and Fog Edge agreed 
to collaborate more often and in a more organized fashion.
-  In e-mails and then in today’s Operators Telco/NFV meeting we finalized a 
proposal to have all WGs meet for high level status monthly and to bring the 
collaboration back to our individual WG sessions.
-  the User Committee sessions are appropriate for the Monthly WG 
Status meeting
-  more detailed coordination across Telco/NFV and Fog Edge groups 
should take place in the Operators Telco NFV WG meetings which already occur 
every 2 weeks.
-  we need participation of each WG Chair (or a delegate)
-  we welcome and request the OPNFV and Linux Foundation and other WGs 
to join us in the cross WG status meetings

The Doodle was set up to gain concurrence on a time of week at which we could 
schedule the meeting and is not intended to be for a specific week.

Jamey McCabe – AT&T Integrated Cloud - jm6819 - mobile if needed 
847-496-1176


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Flavio Percoco

On 12/06/17 15:37 -0400, Sean Dague wrote:

On 06/12/2017 03:20 PM, Flavio Percoco wrote:


Could you please elaborate more on why you think switching code bases is
going
to solve the current problem? In your email you talked about Glance's
over-engineered code as being the thing driving people away and while I
disagree
with that statement, I'm wondering whether you really think that's the
motivation or there's something else.

Let's not talk about proxy API's or ways we would migrate users. I'd
like to
understand why *you* (or others) might think that a complete change of
projects
is a good solution to this problem.

Ultimately, I believe Glance, in addition to not being the "sexiest"
project in
OpenStack, is taking the hit of the recent lay-offs, which it kinda
managed to
avoid last year.


As someone from the outside the glance team, I'd really like to avoid
the artifacts path. I feel like 2 years ago there was a promise that if
glance headed in that direction it would bring in new people, and
everything would be great. But, it didn't bring in folks solving the
class of issues that current glance users are having. 80+ GB disk images
could be classified as a special case of Artifacts, but it turns out that
optimizing for their specialness is really important to a well
functioning cloud.

Glance might not be the most exciting project, but what seems to be
asked for is help on the existing stuff. I'd rather focus there.


Just want to make clear that I'm *not* proposing going down any artifacts path.
I actually disagree with this idea but I do want to understand why other folks
think this is going to solve the issue. There might be some insights there that
we can learn from and use to improve Glance (or not).

Glance can be very exciting if one focuses on the interesting bits and it's an
*AWESOME* place where newcomers can start contributing, new developers can
learn and practice, etc. That said, I believe that code doesn't have to be
challenging to be exciting. There's also excitement in the simple but interesting
things.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Sean Dague
On 06/12/2017 03:20 PM, Flavio Percoco wrote:

> Could you please elaborate more on why you think switching code bases is
> going
> to solve the current problem? In your email you talked about Glance's
> over-engineered code as being the thing driving people away and while I
> disagree
> with that statement, I'm wondering whether you really think that's the
> motivation or there's something else.
> 
> Let's not talk about proxy API's or ways we would migrate users. I'd
> like to
> understand why *you* (or others) might think that a complete change of
> projects
> is a good solution to this problem.
> 
> Ultimately, I believe Glance, in addition to not being the "sexiest"
> project in
> OpenStack, is taking the hit of the recent lay-offs, which it kinda
> managed to
> avoid last year.

As someone from the outside the glance team, I'd really like to avoid
the artifacts path. I feel like 2 years ago there was a promise that if
glance headed in that direction it would bring in new people, and
everything would be great. But, it didn't bring in folks solving the
class of issues that current glance users are having. 80+ GB disk images
could be classified as a special case of Artifacts, but it turns out that
optimizing for their specialness is really important to a well
functioning cloud.

Glance might not be the most exciting project, but what seems to be
asked for is help on the existing stuff. I'd rather focus there.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Flavio Percoco

On 12/06/17 16:56 +0300, Mikhail Fedosin wrote:

Hello!

Flavio raised a very difficult and important question, and I think that we,
as community members, should decide what to do with Glance next.


Hi Mike,



I will try to state my subjective opinion. I was involved in the Glance
project for almost three years and studied it fairly thoroughly. I believe that
the main problem is that the project was designed extremely poorly. Glance
does not have many tasks to solve, but nevertheless, there are a lot of
Java design patterns used (factory of factories, visitors, proxy and other
things that are unnecessary in this case). All this leads to absolutely sad
consequences, when in order to add an image tag over 180 objects of
different classes are created, the code execution passes through more than
25 locations with a number of callbacks 3 times. So I can say that the code
base is artificially over-complicated and incredibly inflated.

The next problem is that over the years the code has grown by a number of
workarounds, which make it difficult to implement new changes - any change
leads to something breaking down somewhere else. In the long run, we get a
lot of pain associated with race conditions, hard-to-recover heisenbugs and
other horrors of programmer's life. It is difficult to talk about
attracting new developers, because developing code in such
conditions is mentally exhausting.


I don't disagree on this. The code base *is* over-engineered in many areas.
However, I don't think this is a good reason to just throw the entire project
away. With enough time and contributions, the code could be refactored.


We can continue to deny the obvious, saying that Glance simply needs people
and everything will be wonderful. But unfortunately this is not so - we
should admit that it is simply not profitable to engage in further
development. I suggest thinking about moving the current code base into a
support mode and starting to develop an alternative (which I have been
doing for the past year and a half).

If you are allergic to the word "artifacts", do not read the following
paragraph:

We are actively developing the Glare project, which offers a universal
catalog of various binary data along with its metadata - at the moment the
catalog supports the storage of images of virtual machines and has feature
parity with Glance. The service is used in production by Nokia, and it was
thoroughly tested at various settings. Next week we plan to release the
first stable version and begin the integration with various projects of
OpenStack: Mistral and Vitrage in the first place.

As a solution, I can propose to implement an additional API to Glare, which
would correspond to OpenStack Image API v2 and test that OpenStack is able
to work on its basis. After that, leave Glance at rest and start developing
Glare as a universal catalog of binary data for OpenStack.


Could you please elaborate more on why you think switching code bases is going
to solve the current problem? In your email you talked about Glance's
over-engineered code as being the thing driving people away and while I disagree
with that statement, I'm wondering whether you really think that's the
motivation or there's something else.

Let's not talk about proxy API's or ways we would migrate users. I'd like to
understand why *you* (or others) might think that a complete change of projects
is a good solution to this problem.

Ultimately, I believe Glance, in addition to not being the "sexiest" project in
OpenStack, is taking the hit of the recent lay-offs, which it kinda managed to
avoid last year.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-06-12 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. booting from volume:
1.1. Wire in storage interface attach/detach operations: 
https://review.openstack.org/#/c/406290
2. Rolling upgrades:
2.1.  'Add new dbsync command with first online data migration': 
https://review.openstack.org/#/c/408556/
3. OSC feature parity: a few really small and simple patches:
3.1. add OSC 'baremetal driver property list' command: 
https://review.openstack.org/#/c/381153/
3.2. add OSC 'baremetal driver raid property list' cmd: 
https://review.openstack.org/#/c/362047/
3.3. OSC 'port create' missing --uuid: 
https://review.openstack.org/#/c/472390/
3.4. OSC 'port set' missing ability to set local-link-connection, 
pxe-enabled: https://review.openstack.org/#/c/472457/
3.5. OSC 'node list' missing ability to filter on driver: 
https://review.openstack.org/#/c/472462/
4. Physical network topology awareness:
4.1. Move create_port to conductor: https://review.openstack.org/469931


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 5 Jun 2017 and 12 Jun 2017)
- Ironic: 250 bugs (+1) + 254 wishlist items (+2). 21 new (-3), 204 in progress 
(+8), 0 critical (-1), 31 high (+1) and 32 incomplete
- Inspector: 15 bugs (+1) + 30 wishlist items. 1 new, 15 in progress (+3), 0 
critical, 3 high (+1) and 3 incomplete
- Nova bugs with Ironic tag: 12. 2 new, 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- next patch to be reviewed, needed for 3rd party CI: 
https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests
- root device hints: TODO

Generic boot-from-volume (TheJulia, dtantsur)
-
- specs and blueprints:
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/volume-connection-information.html
- code: https://review.openstack.org/#/q/topic:bug/1526231
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html
- code: https://review.openstack.org/#/q/topic:bug/1559691
- https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
- code: 
https://review.openstack.org/#/q/topic:bp/ironic-boot-from-volume
- status as of most recent weekly meeting:
- Patches updated and re-stacked against current master branch last week.
- hshiina is looking into Nova-side changes and is attempting to obtain 
clarity on some of the issues that tenant network separation introduced into 
the deployment workflow.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/406290 - Wiring in attach/detach 
operations - Has 1x+2 and 2x-1 - Questions raised and
https://review.openstack.org/#/c/413324 - iPXE template - Has 
positive review feedback
https://review.openstack.org/#/c/454243/ - Skip deployment if BFV - 
Has 1x+2 and 1x-1 - Possibly requires a minor revision.
https://review.openstack.org/#/c/214586/ - Volume Connection 
Information Rest API Change - Erroneous CI failure - needs to be rechecked - 
Above patch will generate a rebase.
Additional patches exist, for python-ironicclient and one for nova.  
Links in the patch/note tracking etherpad.

Rolling upgrades and grenade-partial (rloo, jlvillal)
-
- spec approved; code patches: 
https://review.openstack.org/#/q/topic:bug/1526283
- status as of most recent weekly meeting:
- redesign (conceptually simpler) of the way IronicObject versioning is 
handled by services running different releases was approved: 'Rolling upgrades: 
different object versions' https://review.openstack.org/#/c/469940/
- 'Add version column' https://review.openstack.org/#/c/412397/ has merged
- next patch ready for reviews: 'Add new dbsync command with first online 
data migration': https://review.openstack.org/#/c/408556/
- Testing work: done as per spec, but rloo wants to ask vasyl whether we 
can improve. grenade test will do upgrade so we have old API sending requests 
to old and/or new conductor, but rloo doesn't think there is anything to 
control -which- conductor handles the request, so what if old conductor handles 
all the requests?


Re: [openstack-dev] [tripleo] pike m2 has been released

2017-06-12 Thread Emilien Macchi
On Mon, Jun 12, 2017 at 7:20 PM, Ben Nemec  wrote:
>
>
> On 06/09/2017 05:39 PM, Emilien Macchi wrote:
>>
>> On Fri, Jun 9, 2017 at 5:01 PM, Ben Nemec  wrote:
>>>
>>> Hmm, I was expecting an instack-undercloud release as part of m2.  Is
>>> there
>>> a reason we didn't do that?
>>
>>
>> You just released a new tag: https://review.openstack.org/#/c/471066/
>> with a new release model, why would we release m2? In case you want
>> it, I think we can still do it on Monday.
>
>
> It was a new tag, but the same commit as m1 so it isn't really a new
> release, just a re-tag of the same release we already had.  Part of my
> reasoning for doing that was that it would get a new release for m2.

Sorry, I was confused, my bad. Done: https://review.openstack.org/473561

>
>>
>>> On 06/08/2017 03:47 PM, Emilien Macchi wrote:


 We have a new release of TripleO, pike milestone 2.
 All bugs targeted on Pike-2 have been moved into Pike-3.

 I'll take care of moving the blueprints into Pike-3.

 Some numbers:
 Blueprints: 3 Unknown, 18 Not started, 14 Started, 3 Slow progress, 11
 Good progress, 9 Needs Code Review, 7 Implemented
 Bugs: 197 Fix Released

 Thanks everyone!

>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][tripleo] Add ganesha puppet module

2017-06-12 Thread Emilien Macchi
On Mon, Jun 12, 2017 at 12:27 PM, Jan Provaznik  wrote:
> Hi,
> we would like to use nfs-ganesha for accessing shares on ceph storage
> cluster[1]. There is not yet a puppet module which would install and
> configure nfs-ganesha service. So to be able to set up nfs-ganesha with
> TripleO, I'd like to create a new ganesha puppet module under
> openstack-puppet umbrella unless there is a disagreement?

If you see a benefit in re-using our libraries, OpenStack Infra, our
cookiecutter and our release management, then I guess yes, this is the
right place, like we already have puppet-ceph for example.
+1 from me.

> Thanks, Jan
>
> [1] https://blueprints.launchpad.net/tripleo/+spec/nfs-ganesha
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-12 Thread Jay Pipes

On 06/12/2017 02:17 PM, Edward Leafe wrote:
On Jun 12, 2017, at 10:20 AM, Jay Pipes > wrote:


The RP uuid is part of the provider: the compute node's uuid, and 
(after https://review.openstack.org/#/c/469147/ merges) the PCI 
device's uuid. So in the code that passes the PCI device information 
to the scheduler, we could add that new uuid field, and then the 
scheduler would have the information to a) select the best fit and 
then b) claim it with the specific uuid. Same for all the other 
nested/shared devices.


How would the scheduler know that a particular SRIOV PF resource 
provider UUID is on a particular compute node unless the placement API 
returns information indicating that SRIOV PF is a child of a 
particular compute node resource provider?


Because PCI devices are per compute node. The HostState object populates 
itself from the compute node here:


https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L224-L225

If we add the UUID information to the PCI device, as the above-mentioned 
patch proposes, when the scheduler selects a particular compute node 
that has the device, it uses the PCI device’s UUID. I thought that 
having that information in the scheduler was what that patch was all about.


I would hope that over time, there'd be little to no need for the 
scheduler to read either the compute_nodes or the pci_devices tables 
(which, btw, are in the cell databases). The information that the 
scheduler currently keeps in the host state objects should eventually be 
able to be primarily constructed by the returned results from the 
placement API instead of the existing situation where the scheduler must 
make multiple calls to the multiple cells databases in order to fill 
that information in.
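
As a purely illustrative sketch (the data shape below is an assumption made up
for this example, not the actual placement API response format), this is the
kind of parent/child information that would let the scheduler tie nested
providers such as SRIOV PFs back to the compute node that owns them:

```python
# Hypothetical provider summaries. The "parent" field is the piece of
# information the scheduler needs in order to group nested providers
# (SRIOV PFs, NUMA cells, ...) under their owning compute node.
provider_summaries = [
    {"uuid": "cn-1", "parent": None, "resources": {"VCPU": 16}},
    {"uuid": "pf-a", "parent": "cn-1", "resources": {"SRIOV_NET_VF": 8}},
    {"uuid": "pf-b", "parent": "cn-1", "resources": {"SRIOV_NET_VF": 8}},
]


def providers_by_parent(summaries):
    """Group providers under their direct parent; roots get an entry too."""
    tree = {}
    for provider in summaries:
        parent = provider["parent"]
        if parent is None:
            tree.setdefault(provider["uuid"], [])
        else:
            tree.setdefault(parent, []).append(provider["uuid"])
    return tree


print(providers_by_parent(provider_summaries))
# {'cn-1': ['pf-a', 'pf-b']}
```

Without that parent linkage coming back from placement, the scheduler has no
way to know which host a given PF provider UUID belongs to.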


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] upcoming impact to configuration in Swift 2.15.0

2017-06-12 Thread John Dickinson
## Summary

Swift storage policies using ISA-L Vandermonde (`isa_l_rs_vand`) and
having 5 or more parity bits will no longer be allowed. Swift's
services will refuse to start unless these policies are deprecated.
All existing data in these policies should be migrated to a different
storage policy as soon as possible.

Using ISA-L's Cauchy mode (`isa_l_rs_cauchy`) with 5 or more parity
bits is safe (as is Cauchy mode with less than 5 parity bits). Using
ISA-L's Vandermonde mode (`isa_l_rs_vand`) with less than 5 parity bits
is safe.

This change is expected in the next Swift release (2.15.0) and will
be included in Pike.

## Background

Late last year, we discovered that a particular config setting for
erasure codes in Swift would expose a bug in one of the supported
erasure coded libraries (Intel's ISA-L) and could result in data
becoming corrupted. **THIS DATA CORRUPTION BUG HAS BEEN FIXED**, and
it was included in liberasurecode 1.3.1. We also bumped the dependency
version for liberasurecode in Swift to remove the immediate danger to
Swift clusters.

When we updated the liberasurecode version dependency, we also added a
warning in Swift if we detected a storage policy using `isa_l_rs_vand`
with 5 or more parity bits. These changes were released in Swift 2.13.0
(and as the OpenStack Ocata release).

An example bad erasure code policy config in `/etc/swift.conf` is

```ini
[storage-policy:2]
name = deepfreeze7-6
aliases = df76
policy_type = erasure_coding
ec_type = isa_l_rs_vand
ec_num_data_fragments = 7
ec_num_parity_fragments = 6
ec_object_segment_size = 1048576
```

For deeper context and background, see Swift's upstream erasure code docs.



## What's about to happen?

After the proposed patch lands, Swift services
won't start if you have an EC storage policy with `isa_l_rs_vand` and
5 or more parity bits unless that policy is deprecated. No new containers
with this policy will be able to be created. Existing objects will be
readable and you can still write to containers previously created with
this storage policy.

This proposed patch is expected to be included in Swift's next 2.15.0
release. The OpenStack Pike release will include either Swift 2.15.x
or Swift 2.16.x.


## Why now?

Although Swift and liberasurecode will no longer actively corrupt
data, it's still possible that some failures will result in an
inability to reconstruct missing erasure code fragments to restore
full durability. Operators should immediately cease using
`isa_l_rs_vand` with 5 or more parity bits, and migrate all data
stored in a policy like that to a different storage policy. Since data
movement takes time, this process should be started as soon as
possible.


## What do ops need to do right now?

1. Ensure that you are using liberasurecode 1.4.0 or later.
2. Identify any storage policies using `isa_l_rs_vand` with 5 or more
   parity bits (a small helper sketch follows this list)
3. For each policy found, deprecate the storage policy.
4. Operators should change the name of the bad policy to reflect its
   deprecated state. After renaming this policy, an alias can be created on
   the new policy that matches the name of the old policy. This will provide
   continuity for client apps. See below for an example.
5. If you need to keep an erasure code policy with the same data/parity
   balance, create a new one using `isa_l_rs_cauchy` (this requires
   liberasurecode 1.4.0 or later). Note that a new storage policy must have a
   unique name.
6. Begin migrating existing data from the deprecated policy to a different
   storage policy. Depending on the amount of data stored, this may take a
   long time. At this time, there are no upstream tools to facilitate this,
   but the process is a matter of GET'ing the data from the existing container
   (with the deprecated policy) and PUT'ing the data to the new container
   (using the new policy).
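
For step 2 above, a rough sketch of how the check could be scripted (Python 3
standard library only; the swift.conf path is the usual default and may differ
in your deployment):

```python
import configparser

SWIFT_CONF = "/etc/swift/swift.conf"  # adjust for your deployment

conf = configparser.ConfigParser()
conf.read(SWIFT_CONF)

for section in conf.sections():
    # Only storage policy sections are of interest here.
    if not section.startswith("storage-policy:"):
        continue
    ec_type = conf.get(section, "ec_type", fallback="")
    parity = conf.getint(section, "ec_num_parity_fragments", fallback=0)
    if ec_type == "isa_l_rs_vand" and parity >= 5:
        name = conf.get(section, "name", fallback="<unnamed>")
        print("%s (name=%s): isa_l_rs_vand with %d parity fragments -> "
              "deprecate and migrate" % (section, name, parity))
```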

One way to migrate the data to a new policy is to use Swift's
container sync feature. Using container sync will preserve object
metadata and timestamps. Another option is to write a tool to iterate
over existing data and send COPY requests to copy data to a new
container. The advantage of this second option is that it's cheap to
get started and doesn't require changing anything on the server side.

Note that when objects move from one container to another, though,
their URLs will change.
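
If you go the do-it-yourself route, a bare-bones sketch using
python-swiftclient might look like the following (the auth endpoint,
credentials, container names and policy name are placeholders; large objects,
object metadata and error handling all need extra care, which is part of why
container sync is attractive):

```python
from swiftclient.client import Connection

# Placeholder tempauth-style credentials; use whatever auth your cluster needs.
conn = Connection(authurl="http://swift.example.com/auth/v1.0",
                  user="account:user", key="secret")

OLD, NEW = "deepfreeze-old", "deepfreeze-new"

# The target container must exist and carry the replacement storage policy.
conn.put_container(NEW, headers={"X-Storage-Policy": "deepfreeze7-6-cauchy"})

_, objects = conn.get_container(OLD, full_listing=True)
for obj in objects:
    headers, body = conn.get_object(OLD, obj["name"])
    conn.put_object(NEW, obj["name"], contents=body,
                    content_type=headers.get("content-type"))
    print("copied", obj["name"])
```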

**Caution**

A deprecated policy cannot also be the default policy. Therefore if
your default policy uses `isa_l_rs_vand` and 5 or more parity bits,
you will need to configure a different default policy before
deprecating the policy with the bad config. That may mean adding
another storage policy to act as the default, or making another
existing policy the default.


## Examples

Good config after doing the above steps:

```ini
# this policy is deprecated and replaced by storage policy 3
[storage-policy:2]
name = deepfreeze7-6-deprecated
policy_type = erasure_coding
ec_type = isa_l_rs_vand
ec_num_data_fragments = 7
ec_num_parity_fragments = 6
ec_object_segment_size = 1048576
deprecated = yes
```

Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-12 Thread Edward Leafe
On Jun 12, 2017, at 10:20 AM, Jay Pipes  wrote:

>> The RP uuid is part of the provider: the compute node's uuid, and (after 
>> https://review.openstack.org/#/c/469147/ merges) the PCI device's uuid. So 
>> in the code that passes the PCI device information to the scheduler, we 
>> could add that new uuid field, and then the scheduler would have the 
>> information to a) select the best fit and then b) claim it with the specific 
>> uuid. Same for all the other nested/shared devices.
> 
> How would the scheduler know that a particular SRIOV PF resource provider 
> UUID is on a particular compute node unless the placement API returns 
> information indicating that SRIOV PF is a child of a particular compute node 
> resource provider?


Because PCI devices are per compute node. The HostState object populates itself 
from the compute node here:

https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L224-L225
 


If we add the UUID information to the PCI device, as the above-mentioned patch 
proposes, when the scheduler selects a particular compute node that has the 
device, it uses the PCI device’s UUID. I thought that having that information 
in the scheduler was what that patch was all about.

-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-12 Thread John Villalovos
Just getting back from vacation and I read this :(

So sorry to see you go. Thank you for all your help over the years with
Ironic and helping me with Ironic!

Best of luck with your new work. Your new employer is very lucky!

John

On Thu, Jun 8, 2017 at 5:45 AM, Jim Rollenhagen 
wrote:

> Hey friends,
>
> I've been mostly missing for the past six weeks while looking for a new
> job, so maybe you've forgotten me already, maybe not. I'm happy to tell you
> I've found one that I think is a great opportunity for me. But, I'm sad to
> tell you that it's totally outside of the OpenStack community.
>
> The last 3.5 years have been amazing. I'm extremely grateful that I've
> been able to work in this community - I've learned so much and met so many
> awesome people. I'm going to miss the insane(ly awesome) level of
> collaboration, the summits, the PTGs, and even some of the bikeshedding.
> We've built amazing things together, and I'm sure y'all will continue to do
> so without me.
>
> I'll still be lurking in #openstack-dev and #openstack-ironic for a while,
> if people need me to drop a -2 or dictate old knowledge or whatever, feel
> free to ping me. Or if you just want to chat. :)
>
> <3 jroll
>
> P.S. obviously my core permissions should be dropped now :P
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-12 Thread Fox, Kevin M
"Otherwise, -onetime will need to launch new containers each config change." 
You say that like it's a bad thing

That sounds like a good feature to me. Atomic containers. You always know the 
state of the system. As an operator, I want to know which containers have the 
new config, which have the old, and which are stuck transitioning so I can fix 
brokenness. If it's all hidden inside the containers, it's much harder to operate.

Thanks,
Kevin

From: Paul Belanger [pabelan...@redhat.com]
Sent: Friday, June 09, 2017 10:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
[helm] Configuration management with etcd / confd

On Fri, Jun 09, 2017 at 04:52:25PM +, Flavio Percoco wrote:
> On Fri, Jun 9, 2017 at 11:30 AM Britt Houser (bhouser) 
> wrote:
>
> > How does confd run inside the container?  Does this mean we’d need some
> > kind of systemd in every container which would spawn both confd and the
> > real service?  That seems like a very large architectural change.  But
> > maybe I’m misunderstanding it.
> >
> >
> Copying part of my reply to Doug's email:
>
> 1. Run confd + openstack service inside the container. My concern in this
> case
> would be that we'd have to run 2 services inside the container and structure
> things in a way we can monitor both services and make sure they are both
> running. Nothing impossible but one more thing to do.
>
> 2. Run confd `-onetime` and then run the openstack service.
>
>
> I either case, we could run confd as part of the entrypoint and have it run
> in
> background for the case #1 or just run it sequentially for case #2.
>
Both approaches are valid; it all depends on your use case.  I suspect in the
case of OpenStack, you'll be running 2 daemons in your containers. Otherwise,
-onetime will need to launch new containers on each config change.
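
For what it's worth, the -onetime flavour can be a very small entrypoint. Here
is a rough sketch (the etcd endpoint and the glance-api command line are only
placeholders, not a recommendation):

```python
#!/usr/bin/env python3
"""Container entrypoint sketch: render config once with confd, then exec the service."""
import os
import subprocess

ETCD_NODE = os.environ.get("ETCD_NODE", "http://127.0.0.1:2379")
SERVICE = ["glance-api", "--config-file", "/etc/glance/glance-api.conf"]

# One-shot template rendering; confd exits non-zero if rendering fails,
# which aborts the container start.
subprocess.check_call(["confd", "-onetime", "-backend", "etcd",
                       "-node", ETCD_NODE])

# Replace this process with the real service so it receives signals directly.
os.execvp(SERVICE[0], SERVICE)
```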

>
> > Thx,
> > britt
> >
> > On 6/9/17, 9:04 AM, "Doug Hellmann"  wrote:
> >
> > Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:
> >
> > > Unless I'm missing something, to use confd with an OpenStack
> > deployment on
> > > k8s, we'll have to do something like this:
> > >
> > > * Deploy confd in every node where we may want to run a pod
> > (basically
> > > every node)
> >
> > Oh, no, no. That's not how it works at all.
> >
> > confd runs *inside* the containers. Its input files and command line
> > arguments tell it how to watch for the settings to be used just for
> > that
> > one container instance. It does all of its work (reading templates,
> > watching settings, HUPing services, etc.) from inside the container.
> >
> > The only inputs confd needs from outside of the container are the
> > connection information to get to etcd. Everything else can be put
> > in the system package for the application.
> >
> > Doug
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pike m2 has been released

2017-06-12 Thread Ben Nemec



On 06/09/2017 05:39 PM, Emilien Macchi wrote:

On Fri, Jun 9, 2017 at 5:01 PM, Ben Nemec  wrote:

Hmm, I was expecting an instack-undercloud release as part of m2.  Is there
a reason we didn't do that?


You just released a new tag: https://review.openstack.org/#/c/471066/
with a new release model, why would we release m2? In case you want
it, I think we can still do it on Monday.


It was a new tag, but the same commit as m1 so it isn't really a new 
release, just a re-tag of the same release we already had.  Part of my 
reasoning for doing that was that it would get a new release for m2.





On 06/08/2017 03:47 PM, Emilien Macchi wrote:


We have a new release of TripleO, pike milestone 2.
All bugs targeted on Pike-2 have been moved into Pike-3.

I'll take care of moving the blueprints into Pike-3.

Some numbers:
Blueprints: 3 Unknown, 18 Not started, 14 Started, 3 Slow progress, 11
Good progress, 9 Needs Code Review, 7 Implemented
Bugs: 197 Fix Released

Thanks everyone!



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Making stack outputs static

2017-06-12 Thread Ben Nemec



On 06/09/2017 03:10 PM, Zane Bitter wrote:

History lesson: a long, long time ago we made a very big mistake. We
treated stack outputs as things that would be resolved dynamically when
you requested them, instead of having values fixed at the time the
template was created or updated. This makes performance of reading
outputs slow, especially for e.g. large stacks, because it requires
making ReST calls, and it can result in inconsistencies between Heat's
internal model of the world and what it actually outputs.

As unfortunate as this is, it's difficult to change the behaviour and be
certain that no existing users will get broken. For that reason, this
issue has never been addressed. Now is the time to address it.

Here's the tracker bug: https://bugs.launchpad.net/heat/+bug/1660831

It turns out that the correct fix is to store the attributes of a
resource in the DB - this accounts for the fact that outputs may contain
attributes of multiple resources, and that these resources might get
updated at different times. It also solves a related consistency issue,
which is that during a stack update a resource that is not updated may
nevertheless report new attribute values, and thus cause things
downstream to be updated, or to fail, unexpectedly (e.g.
https://bugzilla.redhat.com/show_bug.cgi?id=1430753#c13).
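
To make the difference concrete, here is a toy Python sketch (not Heat code;
the class and method names are invented purely for illustration) contrasting
live resolution of an output with a value snapshotted at update time:

```python
class Resource:
    """Toy resource whose attribute can drift between reads."""
    def __init__(self, name):
        self.name = name
        self._reads = 0

    def live_attr(self):
        # Stands in for a ReST call to the backing service: the answer can
        # change even though the stack itself was never updated.
        self._reads += 1
        return "%s-value-%d" % (self.name, self._reads)


class Stack:
    def __init__(self, resources):
        self.resources = {r.name: r for r in resources}
        self._stored_outputs = {}

    def update(self):
        # Snapshot the referenced attribute values once, at update time.
        self._stored_outputs = {
            name: res.live_attr() for name, res in self.resources.items()
        }

    def output_live(self, name):
        # Old behaviour: every read may return something different.
        return self.resources[name].live_attr()

    def output_stored(self, name):
        # Proposed behaviour: reads stay consistent until the next update.
        return self._stored_outputs[name]


stack = Stack([Resource("server")])
stack.update()
print(stack.output_live("server"), stack.output_live("server"))      # differ
print(stack.output_stored("server"), stack.output_stored("server"))  # equal
```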

The proposal[1] is to make this change in Pike for convergence stacks
only. This is to allow some warning for existing users who might be
relying on the current behaviour - at least if they control their own
cloud then they can opt to keep convergence disabled, and even once they
opt to enable it for new stacks they can keep using existing stacks in
legacy mode until they are ready to convert them to convergence or
replace them. In addition, it avoids the difficulty of trying to get
consistency out of the legacy path's crazy backup-stack shenanigans -
there's basically no way to get the outputs to behave in exactly the
same way in the legacy path as they will in convergence.

This topic was raised at the Forum, and there was some feedback that:

1) There are users who require the old behaviour even after they move to
convergence.
2) Specifically, there are users who don't have public API endpoints for
services other than Heat, and who rely on Heat proxying requests to
other services to get any information at all about their resources o.O
3) There are users still using the legacy path (*cough*TripleO) that
want the performance benefits of quick output resolution.

The suggestion is that instead of tying the change to the convergence
flag, we should make it configurable by the user on a per-stack basis.

I am vehemently opposed to this suggestion.

It's a total cop-out to make the user decide. The existing behaviour is
clearly buggy and inconsistent. Users are not, and should not have to
be, sufficiently steeped in the inner workings of Heat to be able to
decide whether and when to subject themselves to random inconsistencies
and hope for the best. If we make the change the default then we'll
still break people, and if we don't we'll still be saying "OMG, you
forgot to enable the --not-suck option??!" 10 years from now.

Instead, this is what I'm proposing as the solution to the above feedback:

1) The 'show' attribute of each resource will be marked CACHE_NONE[2].
This ensures that the live data is always available via this attribute.
2) When showing a resource's attributes via the API (as opposed to
referencing them from within a template), always return live values.[3]
Since we only store the attribute values that are actually referenced in
the template anyway, we more or less have to do this if we want the
attributes output through this API to be consistent with each other.
3) Move to convergence. Seriously, the memory and database usage are
much improved, and there are even more memory improvements in the
pipeline,[4] and they might even get merged in Pike as long as we don't
have to stop and reimplement the attribute storage patches that they
depend on. If TripleO were to move to convergence in Queens, which I
believe is 100% feasible, then it would get the performance improvements
at least as soon as it would if we tried to implement attribute storage
in the legacy path.


I think we wanted to move to convergence anyway so I don't see a problem 
with this.  I know there was some discussion about starting to test with 
convergence in tripleo-ci; does anyone know what, if anything, happened 
with that?




Is anyone still dissatisfied? Speak now or... you know the drill.

cheers,
Zane.

[1]
https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bug/1660831

[2] https://review.openstack.org/#/c/422983/33/heat/engine/resource.py
[3] https://review.openstack.org/472501
[4]
https://review.openstack.org/#/q/status:open+project:openstack/heat+topic:bp/stack-definition


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-12 Thread Fox, Kevin M
+1 for putting confd in a sidecar with shared namespaces. Much more k8s native.

Still generally -1 on the approach of using confd instead of configmaps. You 
lose all the atomicity that k8s provides with deployments. It breaks 
upgrade/downgrade behavior.

Would it be possible to have confd run in k8s, generate the configmaps, and 
push them to k8s? That might be even more k8s native.

Thanks,
Kevin

From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Monday, June 12, 2017 1:07 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
[helm] Configuration management with etcd / confd

On 09.06.2017 18:51, Flavio Percoco wrote:
>
>
> On Fri, Jun 9, 2017 at 8:07 AM Doug Hellmann  > wrote:
>
> Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:
>
> > Unless I'm missing something, to use confd with an OpenStack
> deployment on
> > k8s, we'll have to do something like this:
> >
> > * Deploy confd in every node where we may want to run a pod (basically
> > every node)
>
> Oh, no, no. That's not how it works at all.
>
> confd runs *inside* the containers. Its input files and command line
> arguments tell it how to watch for the settings to be used just for that
> one container instance. It does all of its work (reading templates,
> watching settings, HUPing services, etc.) from inside the container.
>
> The only inputs confd needs from outside of the container are the
> connection information to get to etcd. Everything else can be put
> in the system package for the application.
>
>
> A-ha, ok! I figured this was another option. In this case I guess we
> would have 2 options:
>
> 1. Run confd + openstack service inside the container. My concern in
> this case
> would be that we'd have to run 2 services inside the container and structure
> things in a way we can monitor both services and make sure they are both
> running. Nothing impossible but one more thing to do.
>
> 2. Run confd `-onetime` and then run the openstack service.
>

A sidecar confd container running in the same pod, sharing a PID
namespace with the managed service, would look much more
containerish. So confd could still HUP the service or signal it to be
restarted w/o baking itself into the container image. We have to deal
with the Pod abstraction as we want to be prepared for future
integration with k8s.
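
On the receiving side, all the managed process needs is a HUP handler that
re-reads whatever file confd rendered. A minimal sketch (real OpenStack
services would get this via oslo.service's SIGHUP handling rather than
hand-rolled code like this, and the config path is a placeholder):

```python
import signal
import time

CONF_FILE = "/etc/myservice/myservice.conf"  # file rendered by the confd sidecar
current_config = {}


def load_config():
    # Naive key=value parser, just enough for the illustration.
    cfg = {}
    with open(CONF_FILE) as handle:
        for line in handle:
            if "=" in line and not line.lstrip().startswith("#"):
                key, value = line.split("=", 1)
                cfg[key.strip()] = value.strip()
    return cfg


def on_hup(signum, frame):
    # confd (or an operator) sends SIGHUP after rewriting the config file.
    global current_config
    current_config = load_config()
    print("configuration reloaded:", sorted(current_config))


signal.signal(signal.SIGHUP, on_hup)
current_config = load_config()

while True:  # stand-in for the service's real work loop
    time.sleep(60)
```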

>
> Either would work but #2 means we won't have config files monitored and the
> container would have to be restarted to update the config files.
>
> Thanks, Doug.
> Flavio
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Erno Kuvaja
Just a quick comment here on my day off. I will get a proper reply to the
chain itself tomorrow when I'm back.

While I cannot argue with all of Mike's points below, I personally cannot
stand behind his proposals here and just want to indicate that this is the
view of one member, not the Glance community at large.

- Erno "Jokke" Kuvaja

On Mon, Jun 12, 2017 at 2:56 PM, Mikhail Fedosin  wrote:
> Hello!
>
> Flavio raised a very difficult and important question, and I think that we,
> as community members, should decide what to do with Glance next.
>
> I will try to state my subjective opinion. I was involved in the Glance
> project for almost three years and studied it fairly thoroughly. I believe that
> the main problem is that the project was designed extremely poorly. Glance
> does not have many tasks to solve, but nevertheless, there are a lot of Java
> design patterns used (factory of factories, visitors, proxy and other things
> that are unnecessary in this case). All this leads to absolutely sad
> consequences, when in order to add an image tag over 180 objects of
> different classes are created, the code execution passes through more than
> 25 locations with a number of callbacks 3 times. So I can say that the code
> base is artificially over-complicated and incredibly inflated.
>
> The next problem is that over the years the code has grown by a number of
> workarounds, which make it difficult to implement new changes - any change
> leads to something breaking down somewhere else. In the long run, we get a
> lot of pain associated with race conditions, hard-to-recover heisenbugs and
> other horrors of programmer's life. It is difficult to talk about attracting
> new developers, because developing code in such conditions is
> mentally exhausting.
>
> We can continue to deny the obvious, saying that Glance simply needs people
> and everything will be wonderful. But unfortunately this is not so - we
> should admit that it is simply not profitable to engage in further
> development. I suggest thinking about moving the current code base into a
> support mode and starting to develop an alternative (which I have been doing
> for the past year and a half).
>
> If you are allergic to the word "artifacts", do not read the following
> paragraph:
>
> We are actively developing the Glare project, which offers a universal
> catalog of various binary data along with its metadata - at the moment the
> catalog supports the storage of images of virtual machines and has feature
> parity with Glance. The service is used in production by Nokia, and it was
> thoroughly tested at various settings. Next week we plan to release the
> first stable version and begin the integration with various projects of
> OpenStack: Mistral and Vitrage in the first place.
>
> As a solution, I can propose to implement an additional API to Glare, which
> would correspond to OpenStack Image API v2 and test that OpenStack is able
> to work on its basis. After that, leave Glance at rest and start developing
> Glare as a universal catalog of binary data for OpenStack.
>
> Best,
> Mike
>
> On Fri, Jun 9, 2017 at 8:07 PM, Flavio Percoco  wrote:
>>
>> (sorry if duplicate, having troubles with email)
>>
>> Hi Team,
>>
>> I've been working a bit with the Glance team and trying to help where I
>> can and
>> I can't but be worried about the critical status of the Glance team.
>> Unfortunately, the number of participants in the Glance team has been
>> reduced a
>> lot resulting in the project not being able to keep up with the goals, the
>> reviews required, etc.[0]
>>
>> I've always said that Glance is one of those critical projects that not
>> many
>> people notice until it breaks. It's in every OpenStack cloud sitting in a
>> corner
>> and allowing for VMs to be booted. So, before things get even worse, I'd
>> like us to brainstorm a bit on what solutions/options we have now.
>>
>> I know Glance is not the only project "suffering" from lack of
>> contributors but
>> I don't want us to get to the point where there won't be contributors
>> left.
>>
>> How do people feel about adding Glance to the list of "help wanted" areas
>> of
>> interest?
>>
>> Would it be possible to get help w/ reviews from folks from teams like
>> nova/cinder/keystone? Any help is welcomed, of course, but I'm trying to
>> think
>> about teams that may be familiar with the Glance code/api already.
>>
>> Cheers,
>> Flavio
>>
>> [0] http://stackalytics.com/?module=glance-group=marks
>> [1] https://review.openstack.org/#/c/466684/
>>
>> --
>> @flaper87
>> Flavio Percoco
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> 

Re: [openstack-dev] [keystone][glance][nova][neutron][horizon][cinder][osc][swift][manila][telemetry][heat][ptls][all][tc][docs] Documentation migration spec

2017-06-12 Thread Doug Hellmann
I added subject tags for the projects most affected by this change. It
would be good to have the PTLs or liaisons from those teams review the
spec so there are no surprises when we start moving files around.

Excerpts from Alexandra Settle's message of 2017-06-08 15:17:34 +:
> Hi everyone,
> 
> Doug and I have written up a spec following on from the conversation [0] that 
> we had regarding the documentation publishing future.
> 
> Please take the time out of your day to review the spec as this affects 
> *everyone*.
> 
> See: https://review.openstack.org/#/c/472275/
> 
> I will be PTO from the 9th – 19th of June. If you have any pressing concerns, 
> please email me and I will get back to you as soon as I can, or, email Doug 
> Hellmann and hopefully he will be able to assist you.
> 
> Thanks,
> 
> Alex
> 
> [0] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117162.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-12 Thread Jay Pipes

On 06/09/2017 06:31 PM, Ed Leafe wrote:

On Jun 9, 2017, at 4:35 PM, Jay Pipes  wrote:


We can declare that allocating for shared disk is fairly deterministic
if we assume that any given compute node is only associated with one
shared disk provider.


a) we can't assume that
b) a compute node could very well have both local disk and shared disk. how 
would the placement API know which one to pick? This is a sorting/weighing 
decision and thus is something the scheduler is responsible for.


I remember having this discussion, and we concluded that a compute node could 
either have local or shared resources, but not both. There would be a trait to 
indicate shared disk. Has this changed?


I'm not sure it's changed per-se :) It's just that there's nothing 
preventing this from happening. A compute node can theoretically have 
local disk and also be associated with a shared storage pool.



* We already have the information the filter scheduler needs now by
  some other means, right?  What are the reasons we don't want to
  use that anymore?


The filter scheduler has most of the information, yes. What it doesn't have is the 
*identifier* (UUID) for things like SRIOV PFs or NUMA cells that the Placement API will 
use to distinguish between things. In other words, the filter scheduler currently does 
things like unpack a NUMATopology object into memory and determine a NUMA cell to place 
an instance to. However, it has no concept that that NUMA cell is (or will soon be once 
nested-resource-providers is done) a resource provider in the placement API. Same for 
SRIOV PFs. Same for VGPUs. Same for FPGAs, etc. That's why we need to return information 
to the scheduler from the placement API that will allow the scheduler to understand 
"hey, this NUMA cell on compute node X is resource provider $UUID".


I guess that this was the point that confused me. The RP uuid is part of the 
provider: the compute node's uuid, and (after 
https://review.openstack.org/#/c/469147/ merges) the PCI device's uuid. So in 
the code that passes the PCI device information to the scheduler, we could add 
that new uuid field, and then the scheduler would have the information to a) 
select the best fit and then b) claim it with the specific uuid. Same for all 
the other nested/shared devices.


How would the scheduler know that a particular SRIOV PF resource 
provider UUID is on a particular compute node unless the placement API 
returns information indicating that SRIOV PF is a child of a particular 
compute node resource provider?



I don't mean to belabor this, but to my mind this seems a lot less disruptive 
to the existing code.


Belabor away :) I don't mind talking through the details. It's important 
to do.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer][opendaylight][networking-odl] OpenDaylight Driver for Ceilometer

2017-06-12 Thread gordon chung


On 12/06/17 04:25 AM, Deepthi V V wrote:
> Hi,
>
>
>
> We plan to propose a ceilometer driver for collecting network statistics
> information from OpenDaylight. We were wondering whether we could have the
> driver code reside in the networking-odl project instead of the Ceilometer
> project. The thought is to keep OpenDaylight-dependent code restricted to the
> n-odl repo. Please let us know your thoughts on the same.
>

will this run as its own periodic service or do you need to leverage 
ceilometer polling framework?

ideally, all this code would exist outside of ceilometer and have 
ceilometer consume it. the ceilometer team is far from experts on ODL so 
i don't think you want us reviewing ODL code. we'll be glad to help with 
integration though.

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-12 Thread Bogdan Dobrelya
On 12.06.2017 14:02, Jiří Stránský wrote:
> On 9.6.2017 18:51, Flavio Percoco wrote:
>> A-ha, ok! I figured this was another option. In this case I guess we
>> would
>> have 2 options:
>>
>> 1. Run confd + openstack service inside the container. My concern in
>> this
>> case
>> would be that we'd have to run 2 services inside the container and
>> structure
>> things in a way we can monitor both services and make sure they are both
>> running. Nothing impossible but one more thing to do.
> 
> I see several cons with this option:
> 
> * Even if we do this in a sidecar container like Bogdan mentioned (which
> is better than running 2 "top-level" processes in a single container
> IMO), we still have to figure out when to restart the main service,
> IIUC. I see confd in daemon mode listens on the backend change and
> updates the conf files, but i can't find a mention that it would be able
> to restart services. Even if we implemented this auto-restarting in
> OpenStack services, we need to deal with services like MariaDB, Redis,
> ..., so additional wrappers might be needed to make this a generic
> solution.

AFAIK, confd can send a signal to the process, so the actions to be taken
are up to the service: either reload its configs [0] or just exit and be
restarted by the container manager (which is currently docker-daemon in
tripleo).

Speaking of the (tripleo-specific) HA services you've mentioned, let
pacemaker handle those on its own, but in the same way, based on signals
sent to services by confd. For example, a galera service instance may
exit on the signal from the confd sidecar, then be picked up by the next
monitor action, causing it to be restarted by the pcmk resource management
logic.

[0] https://bugs.launchpad.net/oslo-incubator/+bug/1276694
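
As a purely illustrative sketch (not tied to any particular OpenStack
service; the file path and option handling below are made up), the
reload-on-SIGHUP side of this could look like:

import signal
import time

from six.moves import configparser

CONF_FILE = '/etc/myservice/myservice.conf'  # hypothetical path
conf = configparser.ConfigParser()


def reload_conf(signum=None, frame=None):
    # re-read whatever confd just rendered
    conf.read(CONF_FILE)
    print('(re)loaded configuration from %s' % CONF_FILE)


signal.signal(signal.SIGHUP, reload_conf)
reload_conf()

while True:
    time.sleep(60)  # the real service work would happen here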

> 
> * Assuming we've solved the above, if we push a config change to etcd,
> all services get restarted at roughly the same time, possibly creating
> downtime or capacity issues.
> 
> * It complicates the reasoning about container lifecycle, as we have to
> start distinguishing between changes that don't require a new container
> (config change only) vs. changes which do require it (image content
> change). Mutable container config also hides this lifecycle from the
> operator -- the container changes on the inside without COE knowing
> about it, so any operator's queries to COE would look like no changes
> happened.
> 
> I think ideally container config would be immutable, and every time we
> want to change anything, we'd do that via a roll out of a new set of
> containers. This way we have a single way of making changes to reason
> about, and when we're doing rolling updates, it shouldn't result in a
> downtime or tangible performance drop. (Not talking about migrating to a
> new major OpenStack release, which will still remain a special case in
> foreseeable future.)
> 
>>
>> 2. Run confd `-onetime` and then run the openstack service.
> 
> This sounds simpler both in terms of reasoning and technical complexity,
> so if we go with confd, i'd lean towards this option. We'd have to
> rolling-replace the containers from outside, but that's what k8s can
> take care of, and at least the operator can see what's happening on high
> level.
> 
> The issues that Michał mentioned earlier still remain to be solved --
> config versioning ("accidentally" picking up latest config), and how to
> supply config elements that differ per host.
> 
> Also, it's probably worth diving a bit deeper into comparing `confd
> -onetime` and ConfigMaps...
> 
> 
> Jirka
> 
>>
>>
>> Either would work but #2 means we won't have config files monitored
>> and the
>> container would have to be restarted to update the config files.
>>
>> Thanks, Doug.
>> Flavio
>>
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [os-upstream-institute] Meeting reminder

2017-06-12 Thread Ildiko Vancsa
Hi Training Team,

This is a friendly reminder to have our first meeting in the new alternating 
time slot.

The meeting will take place tomorrow (Tuesday, June 13) at 0900 UTC on 
#openstack-meeting-3.

You can find the agenda here: 
https://etherpad.openstack.org/p/openstack-upstream-institute-meetings

See you tomorrow on IRC! :)

Thanks and Best Regards,
Ildikó



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Mikhail Fedosin
Hello!

Flavio raised a very difficult and important question, and I think that we,
as community members, should decide what to do with Glance next.

I will try to state my subjective opinion. I was involved in the Glance
project for almost three years and studied it fairly thoroughly. I believe
that the main problem is that the project was designed extremely poorly.
Glance does not have many tasks to solve, but nevertheless a lot of
Java-style design patterns are used (factory of factories, visitors, proxies
and other things that are unnecessary in this case). All this leads to
absolutely sad consequences: in order to add an image tag, over 180 objects
of different classes are created and the code execution passes through more
than 25 locations, with a number of callbacks invoked 3 times. So I can say
that the code base is artificially over-complicated and incredibly inflated.

The next problem is that over the years the code has accumulated a number of
workarounds, which make it difficult to implement new changes - any change
leads to something breaking down somewhere else. In the long run, we get a
lot of pain associated with race conditions, hard-to-reproduce heisenbugs and
other horrors of a programmer's life. It is difficult to talk about
attracting new developers, because developing code in such conditions is
mentally exhausting.

We can continue to deny the obvious, saying that Glance simply needs people
and everything will be wonderful. But unfortunately this is not so - we
should admit that it is simply not profitable to engage in further
development. I suggest thinking about moving the current code base into a
support mode and starting to develop an alternative (which I have been
doing for the past year and a half).

If you are allergic to the word "artifacts", do not read the following
paragraph:

We are actively developing the Glare project, which offers a universal
catalog of various binary data along with its metadata - at the moment the
catalog supports the storage of images of virtual machines and has feature
parity with Glance. The service is used in production by Nokia, and it has
been thoroughly tested in various configurations. Next week we plan to
release the
first stable version and begin the integration with various projects of
OpenStack: Mistral and Vitrage in the first place.

As a solution, I can propose implementing an additional API in Glare that
corresponds to the OpenStack Image API v2, and testing that OpenStack is able
to work on top of it. After that, we could leave Glance at rest and develop
Glare as a universal catalog of binary data for OpenStack.

Best,
Mike

On Fri, Jun 9, 2017 at 8:07 PM, Flavio Percoco  wrote:

> (sorry if duplicate, having troubles with email)
>
> Hi Team,
>
> I've been working a bit with the Glance team and trying to help where I
> can and
> I can't but be worried about the critical status of the Glance team.
> Unfortunately, the number of participants in the Glance team has been
> reduced a
> lot resulting in the project not being able to keep up with the goals, the
> reviews required, etc.[0]
>
> I've always said that Glance is one of those critical projects that not
> many
> people notice until it breaks. It's in every OpenStack cloud sitting in a
> corner
> and allowing for VMs to be booted. So, before things get even worse, I'd
> like us to brainstorm a bit on what solutions/options we have now.
>
> I know Glance is not the only project "suffering" from lack of
> contributors but
> I don't want us to get to the point where there won't be contributors left.
>
> How do people feel about adding Glance to the list of "help wanted" areas
> of
> interest?
>
> Would it be possible to get help w/ reviews from folks from teams like
> nova/cinder/keystone? Any help is welcomed, of course, but I'm trying to
> think
> about teams that may be familiar with the Glance code/api already.
>
> Cheers,
> Flavio
>
> [0] http://stackalytics.com/?module=glance-group=marks
> [1] https://review.openstack.org/#/c/466684/
>
> --
> @flaper87
> Flavio Percoco
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Status update, Jun 12

2017-06-12 Thread Thierry Carrez
Thierry Carrez wrote:
> == Open discussions ==
> 
> The situation on documenting the state of MySQL / postgresql has been
> unblocked using an ad-hoc IRC meeting[5], which resulted in Dirk Mueller
> producing a new revision of sdague's initial proposal, ready for more
> reviews:
> 
> * Declare plainly the current state of MySQL in OpenStack [5]
> 
> [5] https://review.openstack.org/#/c/427880/
> 
> John Garbutt revised his resolution on ensuring that decisions are
> globally inclusive, which is now up for review:
> 
> * Decisions should be globally inclusive [6]
> 
> [6] https://review.openstack.org/#/c/460946/
> 
> The discussion on Queens goals is making progress. The Gerrit review
> ends up being a good medium to refine a proposal into something
> acceptable and valid, but not so great to make the final selection of
> Queens goals (choosing a limited set between a number of valid
> proposals). As mentioned by cdent, we should wait a bit for the
> discussion about other goals to make progress before using RollCall
> votes, and use CodeReview votes to refine the proposals as needed:
> 
> * Discovery alignment, with two options: [7] [8]
> * Policy and docs in code [9]
> * Migrate off paste [10]
> * Continuing Python 3.5+ Support​ [11]
> 
> [7] https://review.openstack.org/#/c/468436/
> [8] https://review.openstack.org/#/c/468437/
> [9] https://review.openstack.org/#/c/469954/
> [10] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117747.html
> [11] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117746.html

Oops.

In that section I missed the recent update by cdent of the "Introduce
assert:supports-api-interoperability" proposal at
https://review.openstack.org/#/c/418010, which is also ready for review.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Flavio Percoco

On 12/06/17 09:13 -0400, Sean Dague wrote:

On 06/09/2017 01:07 PM, Flavio Percoco wrote:

Would it be possible to get help w/ reviews from folks from teams like
nova/cinder/keystone? Any help is welcomed, of course, but I'm trying to
think
about teams that may be familiar with the Glance code/api already.


I'm happy to help here, I just went through and poked at a few things.
It is going to be tough to make meaningful contributions there without
approve authority, especially given the normal trust building exercise
for core teams takes 3+ months. It might be useful to figure out if
there are a set of folks already in the community that the existing core
team would be happy to provisionally promote to help work through the
current patch backlog and get things flowing.


I think this is fine. I'd be happy to add you and a couple of other folks
that have some time to spend on this to the core team. This would be until
the core team is healthier.

Brian has been sending emails with focus reviews/topics every week and I think
that would be useful especially for folks joining the team provisionally. That
sounds like a better way to invest time.

Not sure whether Brian will have time to keep doing this, perhaps Erno can take
this task on? Erno?
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] the driver composition and breaking changes to the supported interfaces

2017-06-12 Thread Dmitry Tantsur

Hi folks!

I want to raise something we apparently haven't thought about when working on 
the driver composition reform.


For example, an iRMC patch [0] replaces 'pxe' boot with 'irmc-pxe'. This is the 
correct thing to do in this case. They're extending the PXE boot, and need a new 
class and a new entrypoint. We can expect more changes like this coming.


However, this change is breaking for users. Imagine a node explicitly created 
with:

 openstack baremetal node create --driver irmc --boot-interface pxe

On upgrade to Pike, such nodes will break and will require manual intervention 
to get them working again:


 openstack baremetal node set  --boot-interface irmc-pxe

What can we do about it? I see the following possibilities:

1. Keep "pxe" interface supported and issue a deprecation. This is relatively 
easy, but I'm not sure if it's always possible to keep the old interface working.


2. Change the driver composition reform to somehow allow the same names for 
different interfaces. e.g. "pxe" would point to PXEBoot for IPMI, but to 
IRMCPXEBoot for iRMC. This is technically challenging.


3. Only do a release note, and allow the breaking change to happen.

WDYT?

[0] https://review.openstack.org/#/c/416403
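
For option 1 specifically, a rough sketch could look like the following (the
class and module names are assumptions based on the patch, not the actual
code):

# Purely illustrative: keep the old 'pxe' boot interface name alive for
# iRMC hardware as a deprecated alias of the new implementation.
from oslo_log import log

from ironic.drivers.modules.irmc import boot as irmc_boot

LOG = log.getLogger(__name__)


class DeprecatedIRMCPXEBoot(irmc_boot.IRMCPXEBoot):
    """Old 'pxe' boot interface for the irmc hardware type, deprecated."""

    def __init__(self):
        super(DeprecatedIRMCPXEBoot, self).__init__()
        LOG.warning("The 'pxe' boot interface of the irmc hardware type is "
                    "deprecated, please switch nodes to 'irmc-pxe'.")

The alias would stay registered under the old 'pxe' entry point name
(presumably in the ironic.hardware.interfaces.boot group) for a deprecation
period, while 'irmc-pxe' points at the new class.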

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Stepping down from core

2017-06-12 Thread Bhor, Dinesh
Good luck with your new position Roman. You were always helpful and friendly.

Thanks,
Dinesh Bhor

-Original Message-
From: Roman Podoliaka [mailto:roman.podoli...@gmail.com] 
Sent: Sunday, June 11, 2017 8:03 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [oslo.db] Stepping down from core

Hi all,

I recently changed jobs and haven't been able to devote as much time to oslo.db 
as is expected of a core reviewer. I'm no longer working on OpenStack, so 
you won't see me around much.

Anyway, it's been an amazing experience to work with all of you! Best of luck! 
And see ya at various PyCon's around the world! ;)

Thanks,
Roman

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-12 Thread Flavio Percoco

On 12/06/17 10:07 +0200, Bogdan Dobrelya wrote:

On 09.06.2017 18:51, Flavio Percoco wrote:



On Fri, Jun 9, 2017 at 8:07 AM Doug Hellmann > wrote:

Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:

> Unless I'm missing something, to use confd with an OpenStack
deployment on
> k8s, we'll have to do something like this:
>
> * Deploy confd in every node where we may want to run a pod (basically
> wvery node)

Oh, no, no. That's not how it works at all.

confd runs *inside* the containers. Its input files and command line
arguments tell it how to watch for the settings to be used just for that
one container instance. It does all of its work (reading templates,
watching settings, HUPing services, etc.) from inside the container.

The only inputs confd needs from outside of the container are the
connection information to get to etcd. Everything else can be put
in the system package for the application.


A-ha, ok! I figured this was another option. In this case I guess we
would have 2 options:

1. Run confd + openstack service inside the container. My concern in
this case
would be that we'd have to run 2 services inside the container and structure
things in a way we can monitor both services and make sure they are both
running. Nothing impossible but one more thing to do.

2. Run confd `-onetime` and then run the openstack service.



A sidecar confd container running in a shared pod, sharing a PID namespace
with the managed service, would look much more
containerish. So confd could still HUP the service or signal it to be
restarted w/o baking itself into the container image. We have to deal
with the Pod abstraction as we want to be prepared for future
integration with k8s.


Yeah, this might work too. I was just trying to think of options that were
generic enough. In a k8s scenario, this should do the job.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-06-12 Thread Thierry Carrez
Telles Nobrega wrote:
> First of all, thanks for putting this up and the organization looks
> good. I just want to remove that ? from Sahara on Friday. 
> We discussed and we believe that Wednesday and Thursday will suffice.

Thanks Telles! I updated the high-level schedule.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Sean Dague
On 06/09/2017 01:07 PM, Flavio Percoco wrote:
> (sorry if duplicate, having troubles with email)
> 
> Hi Team,
> 
> I've been working a bit with the Glance team and trying to help where I
> can and
> I can't but be worried about the critical status of the Glance team.
> Unfortunately, the number of participants in the Glance team has been
> reduced a
> lot resulting in the project not being able to keep up with the goals, the
> reviews required, etc.[0]
> 
> I've always said that Glance is one of those critical projects that not many
> people notice until it breaks. It's in every OpenStack cloud sitting in
> a corner
> and allowing for VMs to be booted. So, before things get even worse, I'd
> like us to brainstorm a bit on what solutions/options we have now.
> 
> I know Glance is not the only project "suffering" from lack of
> contributors but
> I don't want us to get to the point where there won't be contributors left.
> 
> How do people feel about adding Glance to the list of "help wanted" areas of
> interest?
> 
> Would it be possible to get help w/ reviews from folks from teams like
> nova/cinder/keystone? Any help is welcomed, of course, but I'm trying to
> think
> about teams that may be familiar with the Glance code/api already.

I'm happy to help here, I just went through and poked at a few things.
It is going to be tough to make meaningful contributions there without
approve authority, especially given the normal trust building exercise
for core teams takes 3+ months. It might be useful to figure out if
there are a set of folks already in the community that the existing core
team would be happy to provisionally promote to help work through the
current patch backlog and get things flowing.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [meteos] Meteos Weekly Meeting

2017-06-12 Thread Hiroyuki

Hi Meteos Team,


I would like to skip the weekly meeting tomorrow.

I am a little bit busy now. I have not registered a spec for the TensorFlow 
implementation yet.

Let's discuss the TensorFlow implementation after we have finished the spec.

And I would like to discuss the CFP (call for presentations) for the upcoming 
Sydney summit at the next weekly meeting.


thanks,

Hiroyuki

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Stepping down from core

2017-06-12 Thread Matt Riedemann

On 6/11/2017 9:32 AM, Roman Podoliaka wrote:

Hi all,

I recently changed jobs and haven't been able to devote as much time to
oslo.db as is expected of a core reviewer. I'm no longer working
on OpenStack, so you won't see me around much.

Anyway, it's been an amazing experience to work with all of you! Best
of luck! And see ya at various PyCon's around the world! ;)

Thanks,
Roman

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Good luck with the new position Roman. You've always been a great help 
not only in Oslo land but also helping us out in Nova. You'll be missed.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][requirements][all] removing global pins for linters

2017-06-12 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-06-12 07:09:12 -0400:
> On 06/11/2017 11:30 AM, Doug Hellmann wrote:
> > The recent thread about updating the pylint version in
> > global-requirements.txt raised an issue because it was trying to
> > update the pylint version for all projects using it, but some teams
> > were not ready for the new tests in the latest version. I thought
> > we had dealt with that kind of case in the past by treating linter
> > projects like pylint and flake8 as special cases, and leaving them
> > out of the global requirements list. The requirements repo has a
> > separate file (blacklist.txt) for projects that should not be synced
> > into repositories and tested against the global-requirements.txt
> > list, and pylint is included there along with several other linter
> > tools.
> > 
> > I'm not sure why the linters were also being added to
> > global-requirements.txt, but it seems like a mistake. I have proposed
> > a patch [2] to remove them, which should allow projects that want
> > to update pylint to do so while not forcing everyone to update at
> > the same time. If we find issues with the requirements sync after
> > removing the entries from the global list, we should fix the syncing
> > scripts so we can keep the linters blacklisted.
> > 
> > Doug
> > 
> > [1] http://lists.openstack.org/pipermail/openstack-dev/2017-June/118085.html
> > [2] https://review.openstack.org/473094
> 
> Are you sure that works as expected? My understanding is that the
> requirements enforcement jobs only let you set requirements lines to
> what are in that file. So that effectively prevents anyone from changing
> the linters lines ever (see
> http://logs.openstack.org/69/473369/1/check/gate-nova-requirements/b425844/console.html)
> 
> -Sean
> 

Thanks. https://review.openstack.org/473402 should take care of that.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][glance] Glance needs help, it's getting critical

2017-06-12 Thread Thierry Carrez
Flavio Percoco wrote:
> I've been working a bit with the Glance team and trying to help where I
> can and I can't but be worried about the critical status of the Glance team.
> Unfortunately, the number of participants in the Glance team has been
> reduced a lot resulting in the project not being able to keep up with the 
> goals, the
> reviews required, etc.[0]

Are there more specific areas where the Glance team struggles?

> I've always said that Glance is one of those critical projects that not many
> people notice until it breaks. It's in every OpenStack cloud sitting in
> a corner and allowing for VMs to be booted. So, before things get even worse, 
> I'd
> like us to brainstorm a bit on what solutions/options we have now.
> 
> I know Glance is not the only project "suffering" from lack of contributors 
> but
> I don't want us to get to the point where there won't be contributors left.
> 
> How do people feel about adding Glance to the list of "help wanted" areas of
> interest?

I think that makes sense. Glance (like Keystone, Cinder or Neutron) is a
project that other teams depend on.

> Would it be possible to get help w/ reviews from folks from teams like
> nova/cinder/keystone? Any help is welcomed, of course, but I'm trying to
> think about teams that may be familiar with the Glance code/api already.

I was hoping the VM & BM working group would be the right place to
discuss those priorities and make sure the minimal work is covered.
What's the status of that initiative?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-06-12 Thread Telles Nobrega
First of all, thanks for putting this up and the organization looks good. I
just want to remove that ? from Sahara on Friday.
We discussed and we believe that Wednesday and Thursday will suffice.

Best regards,

On Fri, Jun 2, 2017 at 6:29 AM Emilien Macchi  wrote:

> On Thu, Jun 1, 2017 at 4:38 PM, Thierry Carrez 
> wrote:
> > Thierry Carrez wrote:
> >> In a previous thread[1] I introduced the idea of moving the PTG from a
> >> purely horizontal/vertical week split to a more
> >> inter-project/intra-project activities split, and the initial comments
> >> were positive.
> >>
> >> We need to solidify how the week will look like before we open up
> >> registration (first week of June), so that people can plan their
> >> attendance accordingly. Based on the currently-signed-up teams and
> >> projected room availability, I built a strawman proposal of how that
> >> could look:
> >>
> >>
> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true
> >
> > OK, it looks like the feedback on this strawman proposal was generally
> > positive, so we'll move on with this.
> >
> > For teams that are placed on the Wednesday-Friday segment, please let us
> > know whether you'd like to make use of the room on Friday (pick between
> > 2 days or 3 days). Note that it's not a problem if you do (we have space
> > booked all through Friday) and this can avoid people leaving too early
> > on Thursday afternoon. We just need to know how many rooms we might be
> > able to free up early.
>
> For TripleO, Friday would be good (at least the morning) but I also
> think 2 days would be enough in case we don't have enough space.
>
> - So let's book Wednesday / Thursday / Friday.
> - We probably won't have anything on Friday afternoon, since I expect
> people traveling usually at this time.
> - If not enough room, no worries, we can have Wednesday / Thursday
> only, we'll survive for sure.
>
> Thanks,
>
> > In the same vein, if your team (or workgroup, or inter-project goal) is
> > not yet listed and you'd like to have a room in Denver, let us know ASAP.
> >
> > --
> > Thierry Carrez (ttx)
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat

tenob...@redhat.com

TRIED. TESTED. TRUSTED. 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-12 Thread Jiří Stránský

On 9.6.2017 18:51, Flavio Percoco wrote:

A-ha, ok! I figured this was another option. In this case I guess we would
have 2 options:

1. Run confd + openstack service inside the container. My concern in this
case
would be that we'd have to run 2 services inside the container and structure
things in a way we can monitor both services and make sure they are both
running. Nothing impossible but one more thing to do.


I see several cons with this option:

* Even if we do this in a sidecar container like Bogdan mentioned (which 
is better than running 2 "top-level" processes in a single container 
IMO), we still have to figure out when to restart the main service, 
IIUC. I see confd in daemon mode listens on the backend change and 
updates the conf files, but i can't find a mention that it would be able 
to restart services. Even if we implemented this auto-restarting in 
OpenStack services, we need to deal with services like MariaDB, Redis, 
..., so additional wrappers might be needed to make this a generic solution.


* Assuming we've solved the above, if we push a config change to etcd, 
all services get restarted at roughly the same time, possibly creating 
downtime or capacity issues.


* It complicates the reasoning about container lifecycle, as we have to 
start distinguishing between changes that don't require a new container 
(config change only) vs. changes which do require it (image content 
change). Mutable container config also hides this lifecycle from the 
operator -- the container changes on the inside without COE knowing 
about it, so any operator's queries to COE would look like no changes 
happened.


I think ideally container config would be immutable, and every time we 
want to change anything, we'd do that via a roll out of a new set of 
containers. This way we have a single way of making changes to reason 
about, and when we're doing rolling updates, it shouldn't result in a 
downtime or tangible performance drop. (Not talking about migrating to a 
new major OpenStack release, which will still remain a special case in 
foreseeable future.)




2. Run confd `-onetime` and then run the openstack service.


This sounds simpler both in terms of reasoning and technical complexity, 
so if we go with confd, i'd lean towards this option. We'd have to 
rolling-replace the containers from outside, but that's what k8s can 
take care of, and at least the operator can see what's happening on high 
level.


The issues that Michał mentioned earlier still remain to be solved -- 
config versioning ("accidentally" picking up latest config), and how to 
supply config elements that differ per host.


Also, it's probably worth diving a bit deeper into comparing `confd 
-onetime` and ConfigMaps...



Jirka




Either would work but #2 means we won't have config files monitored and the
container would have to be restarted to update the config files.

Thanks, Doug.
Flavio



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is Routes==2.3.1 a binary only package or something?

2017-06-12 Thread Chris Smart
On Mon, 12 Jun 2017, at 21:36, Michael Still wrote:
> The experimental buildroot based ironic python agent bans all binaries, I
> am not 100% sure why. Chris is the guy there.
> 

Buildroot ironic-python-agent forces a build of all the
ironic-python-agent dependencies (as per requirements and constraints)
with --no-binary :all:, then builds the ironic-python-agent wheel from the
git clone, and then it can just install them all from the locally compiled
wheels into the target.[1]

IIRC this was to make sure that the wheels matched the target. It could
be all done wrong though.

[1]
https://github.com/csmart/ipa-buildroot/blob/master/buildroot-ipa/board/openstack/ipa/post-build.sh#L113

-c

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is Routes==2.3.1 a binary only package or something?

2017-06-12 Thread Michael Still
The experimental buildroot-based ironic python agent bans all binaries; I
am not 100% sure why. Chris is the guy there.

I'm using that ipa as neither the coreos nor tinyipa versions support the
broadcom nic in this here ibm x3550.

Michael

On 12 Jun 2017 8:56 PM, "Sean Dague"  wrote:

> On 06/12/2017 04:29 AM, Michael Still wrote:
> > Hi,
> >
> > I'm trying to explain this behaviour in stable/newton, which specifies
> > Routes==2.3.1 in upper-constraints:
> >
> > $ pip install --no-binary :all: Routes==2.3.1
> > ...
> >   Could not find a version that satisfies the requirement Routes==2.3.1
> > (from versions: 1.5, 1.5.1, 1.5.2, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.7, 1.7.1,
> > 1.7.2, 1.7.3, 1.8, 1.9, 1.9.1, 1.9.2, 1.10, 1.10.1, 1.10.2, 1.10.3,
> > 1.11, 1.12, 1.12.1, 1.12.3, 1.13, 2.0, 2.1, 2.2, 2.3, 2.4.1)
> > Cleaning up...
> > No matching distribution found for Routes==2.3.1
> >
> > There is definitely a 2.3.1 on pip:
> >
> > $ pip install Routes==2.3.1
> > ...
> > Successfully installed Routes-2.3.1 repoze.lru-0.6 six-1.10.0
> >
> > This implies to me that perhaps Routes version 2.3.1 is a binary-only
> > release and that stable/newton is therefore broken for people who don't
> > like binary packages (in my case because they're building an install
> > image for an architecture which doesn't match their host architecture).
> >
> > Am I confused? I'd love to be enlightened.
>
> Routes 2.3.1 appears to be any arch wheel. Is there a specific reason
> that's not going to work for you? (e.g. Routes-2.3.1-py2.py3-none-any.whl)
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Role updates

2017-06-12 Thread Jiri Tomasek



On 12.6.2017 10:55, Dmitry Tantsur wrote:

On 06/09/2017 05:24 PM, Alex Schultz wrote:

Hey folks,

I wanted to bring to your attention that we've merged the change[0] to
add a basic set of roles that can be combined to create your own
roles_data.yaml as needed.  With this change the roles_data.yaml and
roles_data_undercloud.yaml files in THT should not be changed by hand.
Instead if you have an update to a role, please update the appropriate
roles/*.yaml file. I have proposed a change[1] to THT with additional
tools to validate that the roles/*.yaml files are updated and that
there are no unaccounted for roles_data.yaml changes. Additionally
this change adds in a new tox target to assist in the generation of
these basic roles data files that we provide.

Ideally I would like to get rid of the roles_data.yaml and
roles_data_undercloud.yaml so that the end user doesn't have to
generate this file at all but that won't happen this cycle.  In the
mean time, additional documentation around how to work with roles has
been added to the roles README[2].


Hi, this is awesome! Do we expect more example roles to be added? E.g. 
I could add a role for a reference Ironic Conductor node.


Hi, thanks for doing great work in this and bringing up the topic!

I'd like to point out one problem we've been dealing with for quite a 
while now: TripleO UI and CLI interoperability. The main reason we 
introduced the Mistral 'TripleO' API is to consolidate the business logic 
in a single place used by all TripleO clients, so they all share the same 
codebase and do not diverge. This was established and agreed on quite a 
long time ago, but it turns out that the problem of diverging codebases 
still creeps in.


The main problem is that the CLI (unlike all other clients) still tends to 
operate on local files rather than on the deployment plan stored in Swift. 
The result is that new features which should be implemented in a single 
place (tripleo-common - Mistral actions/workflows) are implemented twice - 
in tripleoclient and (usually later, for no real reason) in tripleo-common. 
Roles management is an exact example: there is a great effort being made to 
simplify managing roles, but only in the CLI, regardless of the fact that 
other clients need to do the same. This forces us to maintain two codebases 
with the same goal, and increases development time and other costs.


So my question is: how much effort would it be to change the CLI workflow to 
operate on the plan in Swift rather than on local files? What are the pros 
and cons? How do we solve the problem of missing features in tripleo-common?


Recently, changes have been made in tripleo-common which make operations on 
the Swift plan much simpler. All the data about a deployment is kept in 
Swift in templates/environment files and plan-environment.yaml (which 
replaced the Mistral environment data structure), so importing/exporting a 
plan is much simpler now. If the CLI leveraged this functionality, there 
would be no need for the user to store the CLI command that was used for 
the deployment. All the data is in plan-environment.yaml.


Let's take the roles management example. Alex mentions removing 
roles_data.yaml. Yes, there is no need for it. The deployment plan is 
already pre-created by the undercloud install, so a CLI user could list the 
available roles and use a command which sets roles (taking a list of role 
names); this calls a Mistral action/workflow which stores the selection in 
plan-environment.yaml in Swift and regenerates/updates the j2 templates. 
The same goes for anything else (adding environment files, adding/modifying 
templates, setting parameters...). Then the user just fires 'openstack 
overcloud deploy' and is done. If needed, the user can simply export the 
plan and keep the files locally to easily recreate the same deployment 
elsewhere.


What are the reasons why the CLI could not work this way? Do they outweigh 
having to implement and maintain the business logic in two places?


Thanks,
Jirka





Thanks,
-Alex

[0] https://review.openstack.org/#/c/445687/
[1] https://review.openstack.org/#/c/472731/
[2] 
https://github.com/openstack/tripleo-heat-templates/blob/master/roles/README.rst


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][requirements][all] removing global pins for linters

2017-06-12 Thread Sean Dague
On 06/11/2017 11:30 AM, Doug Hellmann wrote:
> The recent thread about updating the pylint version in
> global-requirements.txt raised an issue because it was trying to
> update the pylint version for all projects using it, but some teams
> were not ready for the new tests in the latest version. I thought
> we had dealt with that kind of case in the past by treating linter
> projects like pylint and flake8 as special cases, and leaving them
> out of the global requirements list. The requirements repo has a
> separate file (blacklist.txt) for projects that should not be synced
> into repositories and tested against the global-requirements.txt
> list, and pylint is included there along with several other linter
> tools.
> 
> I'm not sure why the linters were also being added to
> global-requirements.txt, but it seems like a mistake. I have proposed
> a patch [2] to remove them, which should allow projects that want
> to update pylint to do so while not forcing everyone to update at
> the same time. If we find issues with the requirements sync after
> removing the entries from the global list, we should fix the syncing
> scripts so we can keep the linters blacklisted.
> 
> Doug
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-June/118085.html
> [2] https://review.openstack.org/473094

Are you sure that works as expected? My understanding is that the
requirements enforcement jobs only let you set requirements lines to
what are in that file. So that effectively prevents anyone from changing
the linters lines ever (see
http://logs.openstack.org/69/473369/1/check/gate-nova-requirements/b425844/console.html)

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is Routes==2.3.1 a binary only package or something?

2017-06-12 Thread Sean Dague
On 06/12/2017 04:29 AM, Michael Still wrote:
> Hi,
> 
> I'm trying to explain this behaviour in stable/newton, which specifies
> Routes==2.3.1 in upper-constraints:
> 
> $ pip install --no-binary :all: Routes==2.3.1
> ...
>   Could not find a version that satisfies the requirement Routes==2.3.1
> (from versions: 1.5, 1.5.1, 1.5.2, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.7, 1.7.1,
> 1.7.2, 1.7.3, 1.8, 1.9, 1.9.1, 1.9.2, 1.10, 1.10.1, 1.10.2, 1.10.3,
> 1.11, 1.12, 1.12.1, 1.12.3, 1.13, 2.0, 2.1, 2.2, 2.3, 2.4.1)
> Cleaning up...
> No matching distribution found for Routes==2.3.1
> 
> There is definitely a 2.3.1 on pip:
> 
> $ pip install Routes==2.3.1
> ...
> Successfully installed Routes-2.3.1 repoze.lru-0.6 six-1.10.0
> 
> This implies to me that perhaps Routes version 2.3.1 is a binary-only
> release and that stable/newton is therefore broken for people who don't
> like binary packages (in my case because they're building an install
> image for an architecture which doesn't match their host architecture).
> 
> Am I confused? I'd love to be enlightened.

Routes 2.3.1 appears to be any arch wheel. Is there a specific reason
that's not going to work for you? (e.g. Routes-2.3.1-py2.py3-none-any.whl)

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet][tripleo] Add ganesha puppet module

2017-06-12 Thread Jan Provaznik

Hi,
we would like to use nfs-ganesha for accessing shares on a ceph storage 
cluster[1]. There is not yet a puppet module which would install and 
configure the nfs-ganesha service. So, to be able to set up nfs-ganesha with 
TripleO, I'd like to create a new ganesha puppet module under the 
openstack-puppet umbrella, unless there is disagreement?


Thanks, Jan

[1] https://blueprints.launchpad.net/tripleo/+spec/nfs-ganesha

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is Routes==2.3.1 a binary only package or something?

2017-06-12 Thread Chris Smart
On Mon, 12 Jun 2017, at 18:29, Michael Still wrote:
> Hi,
> 



> This implies to me that perhaps Routes version 2.3.1 is a binary-only
> release and that stable/newton is therefore broken for people who don't
> like binary packages (in my case because they're building an install
> image
> for an architecture which doesn't match their host architecture).
> 

Yes, I think you're correct - there doesn't seem to be a source tarball
for 2.3.1:

https://pypi.python.org/simple/routes/

Pip does find version 2.3:
$ pip install --no-binary :all: Routes==2.3

Collecting Routes==2.3
  Downloading Routes-2.3.tar.gz (181kB)
100%  184kB 3.1MB/s 
Requirement already satisfied (use --upgrade to upgrade): six in
/usr/lib/python2.7/site-packages (from Routes==2.3)
Collecting repoze.lru>=0.3 (from Routes==2.3)
  Downloading repoze.lru-0.6.tar.gz
Installing collected packages: repoze.lru, Routes
  Running setup.py install for repoze.lru ... done
  Running setup.py install for Routes ... done
Successfully installed Routes-2.3 repoze.lru-0.6

Also, AFAICT 2.3.1 was just a single patch over 2.3 for compatibility,
so if you don't need that then you could just stick with 2.3.

-c

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] notification update week 24

2017-06-12 Thread Balazs Gibizer

Hi,

Here is the status update / focus setting mail about notification work 
for week 24.


Bugs

[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance 
notifications are sent with inconsistent timestamp format. One of the 
prerequisite patches needs some discussion 
https://review.openstack.org/#/q/topic:bug/1657428


[New] https://bugs.launchpad.net/nova/+bug/1684860 Versioned server 
notifications don't include updated_at
We missed the updated_at field during the transformation of the instance 
notifications. This is pretty easy to fix so I marked it as low-hanging.


[Low] https://bugs.launchpad.net/nova/+bug/1696152 nova notifications 
use nova-api as binary name instead of nova-osapi_compute
Agreed not to change the binary name in the notifications. Instead we 
make an enum for that name to show that the name is intentional.



Versioned notification transformation
-
Patches still need core attention:
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Code-Review%253C0


Searchlight integration
---
bp additional-notification-fields-for-searchlight
~
The first two patches in the series need core attention; the rest need 
some care from the author:

https://review.openstack.org/#/q/topic:bp/additional-notification-fields-for-searchlight+status:open


Small improvements
~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual
error reporting
* https://review.openstack.org/#/c/450787/ remove ugly local import

* https://review.openstack.org/#/q/topic:refactor-notification-samples
Factor out duplicated notification sample data
This is the start of a longer patch series to deduplicate notification
sample data. The third patch already shows how much sample data can be
deleted from the nova tree. We added a minimal hand-rolled JSON ref
implementation to the notification sample tests as the existing Python JSON
ref implementations are not well maintained.


Weekly meeting
--
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC
on openstack-meeting-4. The next meeting will be held on 13th of June.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170613T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Status update, Jun 12

2017-06-12 Thread Thierry Carrez
Hi!

A bit late due to travel, here is an update on the status of a number of
TC-proposed governance changes, in an attempt to rely less on a weekly
meeting to convey that information.

You can find the full status list of open topics at:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee


== Recently-approved changes ==

* Introducing TC office hours [1] [2] [3]
* Add etherpad link w/ detailed steps to split-tempest-plugins goal [4]
* New git repositories: keystone-tempest-plugins
* Pike goal responses for Telemetry, Shade, Swift

[1] https://review.openstack.org/#/c/467256/
[2] https://review.openstack.org/#/c/467386/
[3] https://review.openstack.org/#/c/468523/
[4] https://review.openstack.org/#/c/468972/

As a result of the "TC office hours" change merging, members of the TC
will make a best-effort attempt to attend the #openstack-tc channel for
office hours and discussions on the following times of the week:
09:00-10:00 UTC on Tuesdays, 01:00-02:00 UTC on Wednesdays, and
15:00-16:00 UTC on Thursdays.


== Open discussions ==

The situation on documenting the state of MySQL / postgresql has been
unblocked using an ad-hoc IRC meeting[5], which resulted in Dirk Mueller
producing a new revision of sdague's initial proposal, ready for more
reviews:

* Declare plainly the current state of MySQL in OpenStack [5]

[5] https://review.openstack.org/#/c/427880/

John Garbutt revised his resolution on ensuring that decisions are
globally inclusive, which is now up for review:

* Decisions should be globally inclusive [6]

[6] https://review.openstack.org/#/c/460946/

The discussion on Queens goals is making progress. The Gerrit review
ends up being a good medium to refine a proposal into something
acceptable and valid, but not so great to make the final selection of
Queens goals (choosing a limited set between a number of valid
proposals). As mentioned by cdent, we should wait a bit for the
discussion about other goals to make progress before using RollCall
votes, and use CodeReview votes to refine the proposals as needed:

* Discovery alignment, with two options: [7] [8]
* Policy and docs in code [9]
* Migrate off paste [10]
* Continuing Python 3.5+ Support​ [11]

[7] https://review.openstack.org/#/c/468436/
[8] https://review.openstack.org/#/c/468437/
[9] https://review.openstack.org/#/c/469954/
[10] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117747.html
[11] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117746.html


== Voting in progress ==

The Top 5 help wanted list (and the "doc owners" initial item) have a
new revision up to catch latest comments. Should be ready for voting now:

* Introduce Top 5 help wanted list [12]
* Add "Doc owners" to top-5 wanted list [13]

[12] https://review.openstack.org/#/c/466684/
[13] https://review.openstack.org/#/c/469115/

Doug's guidelines for managing releases of binary artifacts seem to be
ready for approval:

* Guidelines for managing releases of binary artifacts [14]

[14] https://review.openstack.org/#/c/469265/

Finally we have a follow-up patch on the office hours that further
refines what's expected of TC members:

* Follow-up precisions on office hours [15]

[15] https://review.openstack.org/#/c/470926/


== Tag assertion reviews ==

A number of teams want to assert specific tags. Those will be approved
after a week, unless someone objects:

* assert:supports-rolling-upgrade for keystone [16]
* assert:supports-upgrade to Barbican [17]

[16] https://review.openstack.org/471427
[17] https://review.openstack.org/472547

Additionally, the Kolla team has been applying for stable:follows-policy
for a while[18], but they are still waiting for a review by the Stable
maintenance team.

[18] https://review.openstack.org/#/c/346455/


== TC member actions for the coming week(s) ==

johnthetubaguy, cdent, dtroyer to continue distilling TC vision feedback
into actionable points (and split between cosmetic and significant
changes) [https://review.openstack.org/453262]

johnthetubaguy to finalize updating "Describe what upstream support
means" with a new revision [https://review.openstack.org/440601]

flaper87 to update "Drop Technical Committee meetings" with a new
revision [https://review.openstack.org/459848]

ttx to communicate results of the 2017 contributor attrition stats
analysis he did

Additionally, we are still looking for a volunteer TC member
sponsor/mentor to help the Gluon team navigate the OpenStack seas as
they engage to become an official project. Any volunteer?


== Need for a TC meeting next Tuesday ==

I don't think anything has come up this week that would require a
specific IRC meeting to solve, so we won't be holding a meeting this week.

Thanks everyone!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [tripleo] Role updates

2017-06-12 Thread Dmitry Tantsur

On 06/09/2017 05:24 PM, Alex Schultz wrote:

Hey folks,

I wanted to bring to your attention that we've merged the change[0] to
add a basic set of roles that can be combined to create your own
roles_data.yaml as needed.  With this change the roles_data.yaml and
roles_data_undercloud.yaml files in THT should not be changed by hand.
Instead if you have an update to a role, please update the appropriate
roles/*.yaml file. I have proposed a change[1] to THT with additional
tools to validate that the roles/*.yaml files are updated and that
there are no unaccounted for roles_data.yaml changes.  Additionally
this change adds in a new tox target to assist in the generation of
these basic roles data files that we provide.

Ideally I would like to get rid of the roles_data.yaml and
roles_data_undercloud.yaml so that the end user doesn't have to
generate this file at all but that won't happen this cycle.  In the
mean time, additional documentation around how to work with roles has
been added to the roles README[2].


Hi, this is awesome! Do we expect more example roles to be added? E.g. I could 
add a role for a reference Ironic Conductor node.




Thanks,
-Alex

[0] https://review.openstack.org/#/c/445687/
[1] https://review.openstack.org/#/c/472731/
[2] 
https://github.com/openstack/tripleo-heat-templates/blob/master/roles/README.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Is Routes==2.3.1 a binary only package or something?

2017-06-12 Thread Michael Still
Hi,

I'm trying to explain this behaviour in stable/newton, which specifies
Routes==2.3.1 in upper-constraints:

$ pip install --no-binary :all: Routes==2.3.1
...
  Could not find a version that satisfies the requirement Routes==2.3.1
(from versions: 1.5, 1.5.1, 1.5.2, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.7, 1.7.1,
1.7.2, 1.7.3, 1.8, 1.9, 1.9.1, 1.9.2, 1.10, 1.10.1, 1.10.2, 1.10.3, 1.11,
1.12, 1.12.1, 1.12.3, 1.13, 2.0, 2.1, 2.2, 2.3, 2.4.1)
Cleaning up...
No matching distribution found for Routes==2.3.1

There is definitely a 2.3.1 on pip:

$ pip install Routes==2.3.1
...
Successfully installed Routes-2.3.1 repoze.lru-0.6 six-1.10.0

This implies to me that perhaps Routes 2.3.1 is a binary-only (wheel-only)
release and that stable/newton is therefore broken for people who can't use
binary packages (in my case because I'm building an install image
for an architecture which doesn't match the host architecture).
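
For what it's worth, a quick way to check whether only wheels were uploaded
for that release is to ask the PyPI JSON API; this is just a diagnostic
sketch, not part of any OpenStack tooling:

# List the file types PyPI knows about for Routes 2.3.1.
import json
import urllib.request  # urllib2 on Python 2

url = 'https://pypi.org/pypi/Routes/2.3.1/json'
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read().decode('utf-8'))

for f in data.get('urls', []):
    print(f['packagetype'], f['filename'])
# If only 'bdist_wheel' entries show up, no sdist was uploaded, which
# would explain why "pip install --no-binary :all:" can't find 2.3.1.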

Am I confused? I'd love to be enlightened.

Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry][ceilometer][opendaylight][networking-odl] OpenDaylight Driver for Ceilometer

2017-06-12 Thread Deepthi V V
Hi,

We plan to propose a Ceilometer driver for collecting network statistics
from OpenDaylight. We are wondering whether the driver code could live in the
networking-odl project instead of the Ceilometer project, the idea being to
keep OpenDaylight-dependent code restricted to the n-odl repo. Please
let us know your thoughts.

Thanks,
Deepthi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-12 Thread Bogdan Dobrelya
On 09.06.2017 18:51, Flavio Percoco wrote:
> 
> 
> On Fri, Jun 9, 2017 at 8:07 AM Doug Hellmann wrote:
> 
> Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:
> 
> > Unless I'm missing something, to use confd with an OpenStack
> deployment on
> > k8s, we'll have to do something like this:
> >
> > * Deploy confd on every node where we may want to run a pod (basically
> > every node)
> 
> Oh, no, no. That's not how it works at all.
> 
> confd runs *inside* the containers. Its input files and command-line
> arguments tell it how to watch for the settings to be used just for that
> one container instance. It does all of its work (reading templates,
> watching settings, HUPing services, etc.) from inside the container.
> 
> The only inputs confd needs from outside of the container are the
> connection information to get to etcd. Everything else can be put
> in the system package for the application.
> 
> 
> A-ha, ok! I figured this was another option. In this case I guess we
> would have 2 options:
> 
> 1. Run confd + the openstack service inside the container. My concern in
> this case would be that we'd have to run 2 services inside the container
> and structure things in a way we can monitor both services and make sure
> they are both running. Nothing impossible, but one more thing to do.
> 
> 2. Run confd `-onetime` and then run the openstack service.
> 

A sidecar confd container running in the same pod and sharing a PID
namespace with the managed service would look much more idiomatic for
containers. confd could still HUP the service, or signal it to be
restarted, without baking itself into the container image. We have to
deal with the Pod abstraction anyway, as we want to be prepared for
future integration with k8s.
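
As a purely illustrative sketch of option 2 (the service name, environment
variable and etcd endpoint are placeholders; the flags assume confd's
documented etcd backend), the container entrypoint could be as small as:

# Hypothetical entrypoint: render config once with confd, then exec the
# real service so it takes over as PID 1 of the container.
# Usage (placeholder): entrypoint.py glance-api --config-file /etc/glance/glance-api.conf
import os
import subprocess
import sys

subprocess.check_call([
    'confd', '-onetime', '-backend', 'etcd',
    '-node', os.environ.get('ETCD_ENDPOINT', 'http://127.0.0.1:2379'),
])
# Replace this wrapper process with the service passed on the command line.
os.execvp(sys.argv[1], sys.argv[1:])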

> 
> Either would work but #2 means we won't have config files monitored and the
> container would have to be restarted to update the config files.
> 
> Thanks, Doug.
> Flavio
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-12 Thread Thierry Carrez
Jim Rollenhagen wrote:
> I've been mostly missing for the past six weeks while looking for a new
> job, so maybe you've forgotten me already, maybe not. I'm happy to tell
> you I've found one that I think is a great opportunity for me. But, I'm
> sad to tell you that it's totally outside of the OpenStack community.

You mean there are great opportunities /outside/ the OpenStack
community? Blasphemy!

More seriously, you will be missed. Also, you can come back anytime! We
recently saw a rise in organizations using OpenStack that want to get
involved upstream, and your awesome mix of development and operational
experience would be a boon to them ;)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] Stepping down from core

2017-06-12 Thread ChangBo Guo
Roman,

Thanks for your contributions to oslo.db. I hope our paths cross again
in the future.

Best wishes!


2017-06-11 23:17 GMT+08:00 Doug Hellmann:

> Excerpts from Roman Podoliaka's message of 2017-06-11 17:32:49 +0300:
> > Hi all,
> >
> > I recently changed jobs and haven't been able to devote as much time to
> > oslo.db as is expected of a core reviewer. I'm no longer working
> > on OpenStack, so you won't see me around much.
> >
> > Anyway, it's been an amazing experience to work with all of you! Best
> > of luck! And see ya at various PyCon's around the world! ;)
> >
> > Thanks,
> > Roman
> >
>
> Thanks for your help launching oslo.db and making it so useful,
> Roman.  We'll miss your contributions!
>
> Good luck with your new job,
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstackclient][zaqar]ensure name convention about the new commands in zaqarclient

2017-06-12 Thread hao wang
Hi!

Zaqar now has a lot of commands in openstackclient, but some of them don't
have consistent command names.

For example:

queue_list = zaqarclient.queues.v2.cli:ListQueues
queue_create = zaqarclient.queues.v2.cli:CreateQueue
queue_delete = zaqarclient.queues.v2.cli:DeleteQueue
queue_stats = zaqarclient.queues.v2.cli:GetQueueStats
queue_set_metadata = zaqarclient.queues.v2.cli:SetQueueMetadata
queue_get_metadata = zaqarclient.queues.v2.cli:GetQueueMetadata
queue_purge = zaqarclient.queues.v2.cli:PurgeQueue
pool_create = zaqarclient.queues.v2.cli:CreatePool
pool_show = zaqarclient.queues.v2.cli:ShowPool
pool_update = zaqarclient.queues.v2.cli:UpdatePool
pool_delete = zaqarclient.queues.v2.cli:DeletePool
pool_list = zaqarclient.queues.v2.cli:ListPools
messaging_flavor_list = zaqarclient.queues.v2.cli:ListFlavors
messaging_flavor_delete = zaqarclient.queues.v2.cli:DeleteFlavor
messaging_flavor_update = zaqarclient.queues.v2.cli:UpdateFlavor
messaging_flavor_show = zaqarclient.queues.v2.cli:ShowFlavor
messaging_flavor_create = zaqarclient.queues.v2.cli:CreateFlavor

So Zaqar proposes to change all commands to a consistent naming format:

openstack messaging xxx

Zaqar will mark the old commands as deprecated and will remove them
after the Queens release[1].
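
One purely illustrative way to do the deprecation, assuming the existing
cliff-based command classes listed above, is to keep the old entry points
pointing at thin subclasses that warn and then delegate:

# Hypothetical sketch: keep "openstack queue list" working while steering
# users toward "openstack messaging queue list".
from zaqarclient.queues.v2 import cli


class DeprecatedListQueues(cli.ListQueues):
    """DEPRECATED: use "openstack messaging queue list" instead."""

    def take_action(self, parsed_args):
        self.app.stderr.write(
            'WARNING: "openstack queue list" is deprecated and will be '
            'removed after Queens; use "openstack messaging queue list".\n')
        return super(DeprecatedListQueues, self).take_action(parsed_args)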

But I want to ensure this change aligns with the OpenStack community's
client command naming conventions, so I need some help from the
openstackclient folks to confirm that this "openstack messaging xxx"
change is OK.

Thanks.

[1]:https://review.openstack.org/#/c/470201/4

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

